WO2022242127A1 - Image feature extraction method and apparatus, electronic device, and storage medium - Google Patents
Image feature extraction method and apparatus, electronic device, and storage medium
- Publication number
- WO2022242127A1 (PCT/CN2021/137818)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- adjusted
- feature
- sample
- parameter matrix
- Prior art date
Classifications
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING > G06V10/00—Arrangements for image or video recognition or understanding > G06V10/40—Extraction of image or video features
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06F—ELECTRIC DIGITAL DATA PROCESSING > G06F18/00—Pattern recognition > G06F18/20—Analysing > G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation > G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06F—ELECTRIC DIGITAL DATA PROCESSING > G06F18/00—Pattern recognition > G06F18/20—Analysing > G06F18/22—Matching criteria, e.g. proximity measures
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06F—ELECTRIC DIGITAL DATA PROCESSING > G06F18/00—Pattern recognition > G06F18/20—Analysing > G06F18/25—Fusion techniques > G06F18/253—Fusion techniques of extracted features
Definitions
- Embodiments of the present invention relate to the technical field of image processing, and in particular, to an image feature extraction method and apparatus, an electronic device, and a storage medium.
- At present, image processing technology, as an effective means of obtaining useful information from images, is widely used in various application scenarios. In many scenarios, feature extraction must be performed on images to capture the information of interest from rich image content. With the rapid development of artificial intelligence technology, various neural networks have been applied to image feature extraction in order to improve image processing efficiency. However, in conventional neural-network-based feature extraction, the limited receptive field of the convolution kernel means the model can only learn short-range dependencies within an image; its ability to capture long-range features is poor, which degrades the feature extraction results.
- Embodiments of the present invention provide an image feature extraction method and apparatus, an electronic device, and a storage medium, so as to improve the ability to capture long-range features and thereby improve the model's image feature extraction capability.
- In a first aspect, an embodiment of the present invention provides a training method for an image feature extraction model, including:
- acquiring multiple sets of training sample data, wherein the training sample data includes a sample extraction image and a sample feature image corresponding to the sample extraction image; and
- training a pre-established self-attention model based on the multiple sets of training sample data to generate an image feature extraction model;
- wherein the self-attention model is used to learn the dependency between each pixel in the sample extraction image and all pixels in the image.
- In a second aspect, an embodiment of the present invention also provides a training device for an image feature extraction model, including:
- a sample acquisition module configured to acquire multiple sets of training sample data, wherein the training sample data includes a sample extraction image and a sample feature image corresponding to the sample extraction image; and
- a model training module configured to train a pre-established self-attention model based on the multiple sets of training sample data to generate an image feature extraction model;
- wherein the self-attention model is used to learn the dependency between each pixel in the sample extraction image and all pixels in the image.
- In a third aspect, an embodiment of the present invention also provides an electronic device, comprising:
- one or more processors; and
- a storage device configured to store one or more programs,
- which, when executed by the one or more processors, cause the one or more processors to implement the training method for an image feature extraction model provided in any embodiment of the present invention.
- In a fourth aspect, an embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the training method for an image feature extraction model provided by any embodiment of the present invention is implemented.
- In the technical solution of the embodiments of the present invention, multiple sets of training sample data are obtained, wherein the training sample data includes a sample extraction image and a sample feature image corresponding to the sample extraction image; based on the multiple sets of training sample data, a pre-established self-attention model is trained to generate an image feature extraction model, wherein the self-attention model is used to learn the dependency between each pixel in the sample extraction image and all pixels in the image.
- With the help of the self-attention model, the above technical solution can effectively learn the dependency between each pixel in the sample extraction image and all pixels in the image, thereby obtaining richer global context features of the image to be segmented and improving the training accuracy of the image feature extraction model.
- FIG. 1 is a schematic flowchart of a training method for an image feature extraction model provided by Embodiment 1 of the present invention;
- FIG. 2 is a schematic flowchart of a training method for an image feature extraction model provided by Embodiment 2 of the present invention;
- FIG. 3 is a structural diagram of a self-attention model provided by an embodiment of the present invention;
- FIG. 4 is a schematic structural diagram of a training device for an image feature extraction model provided by Embodiment 3 of the present invention;
- FIG. 5 is a schematic structural diagram of an electronic device provided by Embodiment 4 of the present invention.
- Fig. 1 is a schematic flowchart of a training method for an image feature extraction model provided by Embodiment 1 of the present invention.
- This embodiment is applicable to the situation of performing image feature extraction through a neural network model. The method can be executed by the training device for an image feature extraction model provided by the embodiment of the present invention; the device can be implemented by software and/or hardware, and can be configured in a terminal and/or a server to realize the training method for an image feature extraction model in the embodiment of the present invention.
- As shown in Fig. 1, the method of this embodiment may specifically include:
- S110. Acquire multiple sets of training sample data, wherein the training sample data includes a sample extraction image and a sample feature image corresponding to the sample extraction image.
- In the embodiment of the present invention, the sample extraction image may be any image from which features can be extracted; the type and content of the sample extraction image are not specifically limited here.
- Optionally, the sample extraction images include medical images and the like.
- Typically, the medical image may be a clinical medical image such as a computed tomography (CT) image, a magnetic resonance (MR) image, or a positron emission tomography (PET) image.
- For example, the sample extraction image may be a multi-dimensional intracranial blood vessel image, a pulmonary bronchus image, or the like.
- For example, the sample extraction image may be a planar image.
- The planar image may be an originally acquired planar image. However, the acquired original sample extraction image may also be a three-dimensional or higher-dimensional stereoscopic image.
- When the original sample extraction image is a multi-dimensional image, it may be preprocessed to obtain a planar sample extraction image; for example, a planar image may be obtained by slicing and segmenting a three-dimensional image, as in the sketch below.
- Optionally, the sample extraction image may be a grayscale image.
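- As a minimal illustration of this slicing preprocessing, the following NumPy sketch turns an assumed 3-D volume into planar images (the volume shape and the slicing axis are assumptions, not specified by the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed 3-D original sample extraction image, e.g. a stack of CT slices (depth, H, W)
volume = rng.random((16, 64, 64))

# Slice the volume along its first axis into planar (2-D) sample extraction images
planes = [volume[i] for i in range(volume.shape[0])]
assert planes[0].ndim == 2  # each slice is a single-channel planar image
```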
- The sample feature image is the feature image corresponding to the sample extraction image; it may include, but is not limited to, color features, texture features, shape features, and spatial relationship features of the image.
- In the embodiment of the present invention, the training sample data may be prepared in advance from the sample extraction images and their corresponding sample feature images.
- The storage location of the training sample data is not limited; it can be set according to actual needs, and the data can be obtained directly from the corresponding storage location when necessary.
- S120. Based on the multiple sets of training sample data, train the pre-established self-attention model to generate an image feature extraction model, wherein the self-attention model is used to learn the dependency between each pixel in the sample extraction image and all pixels in the image.
- In this embodiment, the image feature extraction model can be obtained by training the self-attention model in advance on a large number of sample extraction images and their corresponding sample feature images.
- During training, the self-attention model learns the dependency between each pixel in the sample extraction image and all pixels in the image, and the model parameters of the self-attention model are trained: by continuously adjusting these parameters, the deviation between the model's output and the sample feature image corresponding to the sample extraction image gradually decreases and stabilizes, and the image feature extraction model is generated.
- The model parameters of the self-attention model may be initialized randomly, or initialized to fixed values based on experience; this is not specifically limited in this embodiment. Initializing the weights and biases of the model's nodes can improve the model's convergence speed and performance.
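- To make this training procedure concrete, here is a toy, self-contained NumPy sketch. The linear stand-in model, the mean-squared-error criterion, and the gradient-descent update are all illustrative assumptions; the text only states that parameters are adjusted until the deviation between the model output and the sample feature image decreases and stabilizes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the self-attention model: a single randomly initialized linear map.
X = rng.random((32, 8))           # flattened sample extraction images
Y = rng.random((32, 8))           # corresponding sample feature images
W = rng.standard_normal((8, 8))   # randomly initialized model parameters

lr = 0.05
for step in range(500):
    pred = X @ W                              # model output
    err = pred - Y                            # deviation from the sample feature image
    loss = (err ** 2).mean()                  # assumed MSE criterion
    W -= lr * (2.0 / err.size) * (X.T @ err)  # adjust parameters so the deviation shrinks
```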
- In an optional implementation of the embodiment of the present application, the self-attention model is used to learn the dependency between each pixel in the sample extraction image and all pixels in the image.
- Specifically, the dependency in this embodiment is the long-range relationship between different positions in the image.
- By capturing the long-range relationships between pixels at different positions in the sample extraction image and the other pixels of the image, the self-attention model obtains rich global context features, improving its feature extraction capability.
- Optionally, the training method for the image feature extraction model further includes: acquiring at least one target extraction image whose image features are to be extracted; inputting the target extraction image into the pre-trained image extraction model; and outputting the target feature image of the target extraction image.
- In this embodiment, the target extraction image may be any image from which features can be extracted; the target extraction image includes a target segmentation area and a non-target segmentation area.
- The target segmentation area may be an area of interest to the user.
- The target extraction image is fed into the pre-trained image extraction model as input data; the image extraction model performs feature extraction on the target extraction image through its self-attention model, obtains the target feature image corresponding to the target extraction image, and outputs it as output data.
- An embodiment of the present invention provides a training method for an image feature extraction model: multiple sets of training sample data are acquired, wherein the training sample data includes a sample extraction image and a sample feature image corresponding to the sample extraction image; based on the multiple sets of training sample data, a pre-established self-attention model is trained to generate an image feature extraction model, wherein the self-attention model is used to learn the dependency between each pixel in the sample extraction image and all pixels in the image.
- With the help of the self-attention model, the above technical solution can effectively learn the dependency between each pixel in the sample extraction image and all pixels in the image, thereby obtaining richer global context features of the image to be segmented and improving the training accuracy of the image feature extraction model.
- FIG. 2 is a flowchart of a training method for an image feature extraction model provided by Embodiment 2 of the present invention.
- This embodiment builds on any optional technical solution in the embodiments of the present invention.
- Optionally, training the pre-established self-attention model based on the multiple sets of training sample data includes: inputting the sample extraction image into the pre-established self-attention model; performing a linear transformation on the sample extraction image to obtain a first parameter matrix to be adjusted, a second parameter matrix to be adjusted, and a third parameter matrix to be adjusted of the self-attention model; determining a similarity matrix corresponding to the sample extraction image based on the first and second parameter matrices to be adjusted; weighting the similarity matrix based on the third parameter matrix to be adjusted to obtain a weighted feature image; and determining an output feature image based on at least two weighted feature images and the sample feature image.
- As shown in FIG. 2, the method of this embodiment specifically includes:
- S210. Acquire multiple sets of training sample data, wherein the training sample data includes a sample extraction image and a sample feature image corresponding to the sample extraction image.
- S220. Input the sample extraction image into the pre-established self-attention model.
- In this embodiment, the pre-established self-attention model may include various calculation processes, as shown in FIG. 3, for example similarity calculation, scaling, normalization, and feature fusion.
- Specifically, the sample extraction image is fed into the pre-established self-attention model as input data for calculation.
- The sample extraction image can be denoted by R.
- S230. Perform a linear transformation on the sample extraction image to obtain the first parameter matrix to be adjusted, the second parameter matrix to be adjusted, and the third parameter matrix to be adjusted of the self-attention model.
- In this embodiment, the linear transformation applies a linear (straight-line) equation to the data of the sample extraction image to obtain the first, second, and third parameter matrices to be adjusted of the self-attention model.
- Its purpose is to make the sample extraction image highlight the regions of interest, facilitating subsequent processing.
- In an optional implementation of the embodiment of the present invention, obtaining the first, second, and third parameter matrices to be adjusted of the self-attention model by linearly transforming the sample extraction image may include:
- q = W_q R
- k = W_k R
- v = W_v R
- where R represents the sample extraction image, q represents the first parameter matrix to be adjusted, k represents the second parameter matrix to be adjusted, v represents the third parameter matrix to be adjusted, W_q represents a randomly initialized matrix corresponding to the first parameter matrix to be adjusted, W_k represents a randomly initialized matrix corresponding to the second parameter matrix to be adjusted, and W_v represents a randomly initialized matrix corresponding to the third parameter matrix to be adjusted.
- In this embodiment, the self-attention model randomly initializes the parameter matrices to be adjusted, which can improve the calculation speed of the self-attention model and help it converge toward the global optimum.
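- A minimal NumPy sketch of this linear transformation follows. The image is flattened so that each of its N = H × W pixels becomes a row of c channel values; the projection dimension d and the right-multiplication convention are illustrative assumptions (the text writes the products as W_q R, etc.):

```python
import numpy as np

rng = np.random.default_rng(0)

H, W, c = 8, 8, 3            # assumed image size and channel count
d = 16                       # assumed dimension of the parameter matrices
R = rng.random((H * W, c))   # sample extraction image R, flattened to N x c

# Randomly initialized matrices corresponding to the three parameter matrices to be adjusted
W_q = rng.standard_normal((c, d))
W_k = rng.standard_normal((c, d))
W_v = rng.standard_normal((c, d))

q = R @ W_q   # first parameter matrix to be adjusted  (q = W_q R)
k = R @ W_k   # second parameter matrix to be adjusted (k = W_k R)
v = R @ W_v   # third parameter matrix to be adjusted  (v = W_v R)
```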
- S240. Determine the similarity matrix corresponding to the sample extraction image based on the first and second parameter matrices to be adjusted.
- In this embodiment, the similarity matrix is computed from the first and second parameter matrices to be adjusted of the sample extraction image; the similarity matrix is a matrix of the relationships between each position in the sample extraction image and every other position.
- In an optional implementation of the embodiment of the present invention, determining the similarity matrix corresponding to the sample extraction image based on the first and second parameter matrices to be adjusted includes: determining each pixel in the sample extraction image, one by one, as a target pixel; for each target pixel, calculating the pixel similarity between the target pixel and all pixels in the sample extraction image based on the first and second parameter matrices to be adjusted; and constructing the similarity matrix corresponding to the sample extraction image based on the position of each target pixel in the sample extraction image and the respective pixel similarities.
- In other words, the pixel information of every pixel of the sample extraction image is acquired; this information can include each pixel's position in the sample extraction image and the respective pixel similarities, from which the similarity matrix corresponding to the sample extraction image is constructed. In this way, the dependency between each pixel in the sample extraction image and all other pixels is learned, and the global context information of the sample extraction image is obtained.
- In an optional implementation of the embodiment of the present invention, calculating the pixel similarity between the target pixel and all pixels in the sample extraction image based on the first and second parameter matrices to be adjusted can be achieved by the following formula (reconstructed here from the symbol definitions that follow):
- Ω_(i,j) = ( Σ_n q_(i,n) · t_(n,j) ) / √d
- where (i, j) denotes the position in row i and column j of the sample extraction image, Ω_(i,j) denotes the similarity at the position in row i and column j of the similarity matrix, q denotes the first parameter matrix to be adjusted, k denotes the second parameter matrix to be adjusted, q_(i,n) denotes the element in row i and column n of the first parameter matrix q to be adjusted, t_(n,j) denotes the element in row n and column j of the matrix t, the matrix t is the transpose of the second parameter matrix k to be adjusted, d denotes the dimension of the second parameter matrix k to be adjusted, and c denotes the number of channels of the input image.
- The division by √d is a scaling operation on the sample extraction image; the scaling changes the spatial positions of the sample extraction image's pixels in the new image and gives the pixel similarity calculation a stable gradient.
- By computing the pixel similarities of the sample extraction image, the dependency between the current pixel and the other pixels of the current image is obtained, which improves the ability to capture long-range dependencies in the image.
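- Continuing the NumPy sketch from above, the similarity matrix and its √d scaling can be computed as:

```python
# t is the transpose of the second parameter matrix k; the scaled dot product
# Omega[i, j] = sum_n q[i, n] * t[n, j] / sqrt(d) measures the similarity between
# pixel i and pixel j, and the division by sqrt(d) keeps the gradient stable.
t = k.T
Omega = (q @ t) / np.sqrt(d)   # similarity matrix, one row per target pixel
```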
- S250. Weight the similarity matrix based on the third parameter matrix to be adjusted to obtain a weighted feature image.
- In this embodiment, the third parameter matrix to be adjusted weights the similarity matrix; specifically, the third parameter matrix to be adjusted is multiplied, as a weight matrix, with the similarity matrix to obtain the weighted feature image.
- In an optional implementation of the embodiment of the present invention, weighting the similarity matrix based on the third parameter matrix to be adjusted to obtain a weighted feature image may include:
- normalizing the similarity matrix; and
- weighting the normalized similarity matrix based on the third parameter matrix to be adjusted to obtain the weighted feature image.
- Weighting the normalized similarity matrix based on the third parameter matrix to be adjusted is specifically implemented with the following formula (reconstructed here from the symbol definitions that follow):
- A(q,k,v)_(i,j) = Σ_n Ω′_(i,n) · v_(n,j)
- where A(q,k,v)_(i,j) denotes the weighted feature value in row i and column j of the weighted feature image A obtained from the matrices q, k, and v; v denotes the third parameter matrix to be adjusted; H_0 denotes the target output length of the sample feature map; W_0 denotes the target output width of the sample feature map; Ω′ denotes the normalized similarity matrix; Ω′_(i,n) denotes the element in row i and column n of the normalized similarity matrix Ω′; and v_(n,j) denotes the element in row n and column j of the third parameter matrix v to be adjusted.
- The embodiment of the present invention normalizes the similarity matrix and then weights the normalized similarity matrix with the third parameter matrix to be adjusted, computing the weighted feature value of the current pixel. This improves the reliability of the features extracted from the sample-encoded image and yields a more effective weighted feature image.
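- Continuing the sketch, the text does not specify the normalization; a row-wise softmax is assumed here. The weighting then follows A(q,k,v)_(i,j) = Σ_n Ω′_(i,n) · v_(n,j):

```python
# Row-wise softmax normalization of the similarity matrix (assumed; the text
# only states that the similarity matrix is normalized)
Omega_exp = np.exp(Omega - Omega.max(axis=1, keepdims=True))
Omega_prime = Omega_exp / Omega_exp.sum(axis=1, keepdims=True)

# Weight the normalized similarities with the third parameter matrix v
A = Omega_prime @ v   # weighted feature image, one weighted feature per pixel
```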
- S260. Determine an output feature image based on at least two weighted feature images and the sample feature image.
- In this embodiment, at least two weighted feature images can be fused; using the image features of multiple weighted feature images achieves feature enhancement. The fused image of the at least two weighted feature images and the sample feature image are then used to determine the output feature image. By comparing the output feature image with the sample image, the deviation between the model's output feature image and the sample feature image corresponding to the sample extraction image gradually decreases and stabilizes, and the image feature extraction model is generated.
- In an optional implementation of the embodiment of the present invention, determining the output feature image based on at least two weighted feature images and the sample feature image may include: fusing the at least two weighted feature images to obtain a fused feature image; adjusting the feature dimension of the fused feature image to the target feature dimension, and adding the fused feature image, adjusted to the target feature dimension, to the sample extraction image to obtain a target dimension image; inputting the target dimension image into at least one fully connected layer of the self-attention model to obtain an output dimension image; and adjusting the output dimension image to the feature dimension of the fused feature image to obtain the output feature image.
- Here, the target feature dimension can be understood as the number of channels of the target feature; for example, one channel is one-dimensional, two channels are two-dimensional, and n channels are n-dimensional.
- Specifically, fusing multiple weighted feature images along the channel dimension yields the fused feature image A′:
- A′ = A_1 + A_2 + … + A_n
- where n is the number of channels of the weighted feature images. After A′ is obtained, the feature dimension of the fused feature image is adjusted to the target feature dimension, and the fused feature image C, adjusted to the target feature dimension, is added to the sample extraction image R to obtain the target dimension image C′:
- C′ = C + R
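- Continuing the sketch, the fusion A′ = A_1 + … + A_n and the residual addition C′ = C + R might look as follows; the two stand-in weighted feature images and the linear projection used to reach the target feature dimension are assumptions:

```python
# Fuse at least two weighted feature images by summation: A' = A_1 + A_2 + ... + A_n
A1, A2 = A, A                        # stand-ins for two weighted feature images
A_fused = A1 + A2

# Adjust the fused feature image to the target feature dimension (back to c channels
# here, via an assumed linear projection), then add the sample extraction image R
W_proj = rng.standard_normal((d, c))
C = A_fused @ W_proj                 # fused feature image at the target feature dimension
C_prime = C + R                      # target dimension image: C' = C + R
```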
- In an optional implementation of the embodiment of the present invention, the self-attention model preferably includes two fully connected layers; inputting the target dimension image into at least one fully connected layer of the self-attention model to obtain the output dimension image may include:
- S = conv(dense(dense(C′)) + C′)
- where S represents the output dimension image, dense represents a fully connected layer whose activation function is a rectified linear unit (ReLU), and conv represents a convolutional layer used to unify the feature dimension.
- In this embodiment, the self-attention model includes two fully connected layers; every neuron in a fully connected layer is fully connected to all neurons in the previous layer, and the fully connected layers can integrate the class-discriminative local information in the convolutional layers.
- To improve the performance of the self-attention model, the activation function of each neuron in the fully connected layers generally adopts the rectified linear unit.
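- Finishing the sketch, S = conv(dense(dense(C′)) + C′) can be mimicked with two ReLU fully connected layers and a linear projection standing in for the feature-dimension-unifying conv layer; the layer widths and the conv stand-in are assumptions:

```python
def dense(x, w, b):
    # Fully connected layer with the ReLU activation stated in the text
    return np.maximum(x @ w + b, 0.0)

w1, b1 = rng.standard_normal((c, c)), np.zeros(c)
w2, b2 = rng.standard_normal((c, c)), np.zeros(c)
W_conv = rng.standard_normal((c, c))   # stand-in for the dimension-unifying conv layer

# S = conv(dense(dense(C')) + C')
S = (dense(dense(C_prime, w1, b1), w2, b2) + C_prime) @ W_conv
```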
- An embodiment of the present invention provides a training method for an image feature extraction model: multiple sets of training sample data are acquired, wherein the training sample data includes a sample extraction image and a sample feature image corresponding to the sample extraction image; based on the multiple sets of training sample data, a pre-established self-attention model is trained to generate an image feature extraction model, wherein the self-attention model is used to learn the dependency between each pixel in the sample extraction image and all pixels in the image.
- With the help of the self-attention model, the above technical solution can effectively learn the dependency between each pixel in the sample extraction image and all pixels in the image, thereby obtaining richer global context features of the image to be segmented and improving the training accuracy of the image feature extraction model.
- FIG. 4 is a schematic structural diagram of a training device for an image feature extraction model provided by Embodiment 3 of the present invention.
- The training device for an image feature extraction model provided in this embodiment can be implemented by software and/or hardware, and can be configured in a terminal and/or a server to carry out the training method for an image feature extraction model in the embodiment of the present invention.
- The device may specifically include: a sample acquisition module 310 and a model training module 320.
- The sample acquisition module 310 is configured to acquire multiple sets of training sample data, wherein the training sample data includes a sample extraction image and a sample feature image corresponding to the sample extraction image.
- The model training module 320 is configured to train a pre-established self-attention model based on the multiple sets of training sample data to generate an image feature extraction model, wherein the self-attention model is used to learn the dependency between each pixel in the sample extraction image and all pixels in the image.
- An embodiment of the present invention provides a training device for an image feature extraction model: multiple sets of training sample data are acquired, wherein the training sample data includes a sample extraction image and a sample feature image corresponding to the sample extraction image; based on the multiple sets of training sample data, a pre-established self-attention model is trained to generate an image feature extraction model, wherein the self-attention model is used to learn the dependency between each pixel in the sample extraction image and all pixels in the image.
- With the help of the self-attention model, the above technical solution can effectively learn the dependency between each pixel in the sample extraction image and all pixels in the image, thereby obtaining richer global context features of the image to be segmented and improving the training accuracy of the image feature extraction model.
- On the basis of any optional technical solution in the embodiments of the present invention, the model training module 320 may include:
- a sample input unit configured to input the sample extraction image into the pre-established self-attention model;
- an image linear transformation unit configured to perform a linear transformation on the sample extraction image to obtain the first, second, and third parameter matrices to be adjusted of the self-attention model;
- a similarity matrix determining unit configured to determine a similarity matrix corresponding to the sample extraction image based on the first and second parameter matrices to be adjusted;
- a matrix weighting unit configured to weight the similarity matrix based on the third parameter matrix to be adjusted to obtain a weighted feature image; and
- an image output unit configured to determine an output feature image based on at least two weighted feature images and the sample feature image.
- Optionally, the image linear transformation unit may be configured to compute:
- q = W_q R
- k = W_k R
- v = W_v R
- where R represents the sample extraction image, q represents the first parameter matrix to be adjusted, k represents the second parameter matrix to be adjusted, v represents the third parameter matrix to be adjusted, W_q represents a randomly initialized matrix corresponding to the first parameter matrix to be adjusted, W_k represents a randomly initialized matrix corresponding to the second parameter matrix to be adjusted, and W_v represents a randomly initialized matrix corresponding to the third parameter matrix to be adjusted.
- Optionally, the similarity matrix determining unit may include:
- a target pixel determination subunit configured to determine each pixel in the sample extraction image, one by one, as a target pixel;
- a pixel similarity calculation subunit configured to calculate, for each target pixel, the pixel similarity between the target pixel and all pixels in the sample extraction image based on the first and second parameter matrices to be adjusted; and
- a similarity matrix construction subunit configured to construct the similarity matrix corresponding to the sample extraction image based on the position of each target pixel in the sample extraction image and the respective pixel similarities.
- Optionally, the pixel similarity calculation subunit may further be configured to compute:
- Ω_(i,j) = ( Σ_n q_(i,n) · t_(n,j) ) / √d
- where (i, j) denotes the position in row i and column j of the sample extraction image, Ω_(i,j) denotes the similarity at the position in row i and column j of the similarity matrix, q denotes the first parameter matrix to be adjusted, k denotes the second parameter matrix to be adjusted, q_(i,n) denotes the element in row i and column n of the first parameter matrix q to be adjusted, t_(n,j) denotes the element in row n and column j of the matrix t, the matrix t is the transpose of the second parameter matrix k to be adjusted, d denotes the dimension of the second parameter matrix k to be adjusted, and c denotes the number of channels of the input image.
- Optionally, the matrix weighting unit may be specifically configured to:
- normalize the similarity matrix; and
- weight the normalized similarity matrix based on the third parameter matrix to be adjusted to obtain the weighted feature image, specifically via the following formula:
- A(q,k,v)_(i,j) = Σ_n Ω′_(i,n) · v_(n,j)
- where A(q,k,v)_(i,j) denotes the weighted feature value in row i and column j of the weighted feature image A obtained from the matrices q, k, and v; v denotes the third parameter matrix to be adjusted; H_0 denotes the target output length of the sample feature map; W_0 denotes the target output width of the sample feature map; Ω′ denotes the normalized similarity matrix; Ω′_(i,n) denotes the element in row i and column n of the normalized similarity matrix Ω′; and v_(n,j) denotes the element in row n and column j of the third parameter matrix v to be adjusted.
- Optionally, the image output unit may include:
- an image fusion subunit configured to fuse at least two weighted feature images to obtain a fused feature image;
- a target dimension image generation subunit configured to adjust the feature dimension of the fused feature image to the target feature dimension and add the fused feature image, adjusted to the target feature dimension, to the sample extraction image to obtain a target dimension image;
- an output dimension image generation subunit configured to input the target dimension image into at least one fully connected layer of the self-attention model to obtain an output dimension image; and
- an output feature image generation subunit configured to adjust the output dimension image to the feature dimension of the fused feature image to obtain the output feature image.
- Optionally, the self-attention model includes two fully connected layers;
- the output dimension image generation subunit may be specifically configured to compute:
- S = conv(dense(dense(C′)) + C′)
- where S represents the output dimension image, dense represents a fully connected layer whose activation function is a rectified linear unit, and conv represents a convolutional layer used to unify the feature dimension.
- Optionally, the training device for the image feature extraction model may also include:
- a target extraction image acquisition module configured to acquire at least one target extraction image whose image features are to be extracted; and
- a target feature image output module configured to input the target extraction image into the pre-trained image extraction model and output the target feature image of the target extraction image.
- The above training device for an image feature extraction model can execute the training method for an image feature extraction model provided by any embodiment of the present invention, and has the corresponding functional modules and beneficial effects for executing that method.
- FIG. 5 is a schematic structural diagram of an electronic device provided by Embodiment 4 of the present invention.
- FIG. 5 shows a block diagram of an exemplary electronic device 12 suitable for use in implementing embodiments of the present invention.
- the electronic device 12 shown in FIG. 5 is only an example, and should not limit the functions and scope of use of this embodiment of the present invention.
- electronic device 12 takes the form of a general-purpose computing device.
- Components of the electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 connecting the various system components (including the system memory 28 and the processing unit 16).
- Bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
- Electronic device 12 typically includes a variety of computer system readable media. These media can be any available media that can be accessed by electronic device 12 and include both volatile and nonvolatile media, removable and non-removable media.
- System memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32 .
- the electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media.
- storage system 34 may be used to read and write to non-removable, non-volatile magnetic media (not shown in FIG. 5, commonly referred to as a "hard drive”).
- Although not shown in FIG. 5, a disk drive for reading from and writing to a removable nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from and writing to a removable nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided.
- In these cases, each drive may be connected to bus 18 via one or more data media interfaces.
- System memory 28 may include at least one program product having a set (eg, at least one) of program modules configured to perform the functions of various embodiments of the present invention.
- A program/utility 40, having a set (at least one) of program modules 42, may be stored, for example, in system memory 28; such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data, each of which, or some combination of which, may include an implementation of a networking environment.
- Program modules 42 generally perform the functions and/or methodologies of the described embodiments of the invention.
- The electronic device 12 may also communicate with one or more external devices 14 (e.g., a keyboard, a pointing device, a display 24, etc.), with one or more devices that enable a user to interact with the electronic device 12, and/or with any device (e.g., a network card, a modem, etc.) that enables the electronic device 12 to communicate with one or more other computing devices. Such communication may occur through input/output (I/O) interfaces 22.
- The electronic device 12 can also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through the network adapter 20. As shown in FIG. 5, the network adapter 20 communicates with the other modules of the electronic device 12 via bus 18. It should be understood that, although not shown in FIG. 5, other hardware and/or software modules may be used in conjunction with the electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
- The processing unit 16 executes various functional applications and data processing by running the programs stored in the system memory 28, for example, implementing the training method for an image feature extraction model provided by the embodiments of the present invention.
- Embodiment 5 of the present invention also provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a training method for an image feature extraction model, the method comprising:
- acquiring multiple sets of training sample data, wherein the training sample data includes a sample extraction image and a sample feature image corresponding to the sample extraction image; and
- training a pre-established self-attention model based on the multiple sets of training sample data to generate an image feature extraction model;
- wherein the self-attention model is used to learn the dependency between each pixel in the sample extraction image and all pixels in the image.
- the computer storage medium in the embodiments of the present invention may use any combination of one or more computer-readable media.
- the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
- A computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
- A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing.
- A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device.
- Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
- Computer program code for carrying out the operations of embodiments of the present invention may be written in one or more programming languages, or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
- In cases involving a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
An embodiment of the present invention discloses a training method, apparatus, device, and storage medium for an image feature extraction model. The method includes: acquiring multiple sets of training sample data, wherein the training sample data includes a sample extraction image and a sample feature image corresponding to the sample extraction image; and training a pre-established self-attention model based on the multiple sets of training sample data to generate an image feature extraction model, wherein the self-attention model is used to learn the dependency between each pixel in the sample extraction image and all pixels in the image. When training the image feature extraction model, the above technical solution can, with the help of the self-attention model, effectively learn the dependency between each pixel in the sample extraction image and all pixels in the image, thereby obtaining richer global context features of the image to be segmented and improving the training accuracy of the image feature extraction model.
Claims (12)
- A training method for an image feature extraction model, comprising: acquiring multiple sets of training sample data, wherein the training sample data includes a sample extraction image and a sample feature image corresponding to the sample extraction image; and training a pre-established self-attention model based on the multiple sets of training sample data to generate an image feature extraction model; wherein the self-attention model is used to learn the dependency between each pixel in the sample extraction image and all pixels in the image.
- The method according to claim 1, wherein training the pre-established self-attention model based on the multiple sets of training sample data comprises: inputting the sample extraction image into the pre-established self-attention model; performing a linear transformation on the sample extraction image to obtain a first parameter matrix to be adjusted, a second parameter matrix to be adjusted, and a third parameter matrix to be adjusted of the self-attention model; determining a similarity matrix corresponding to the sample extraction image based on the first parameter matrix to be adjusted and the second parameter matrix to be adjusted; weighting the similarity matrix based on the third parameter matrix to be adjusted to obtain a weighted feature image; and determining an output feature image based on at least two weighted feature images and the sample feature image.
- The method according to claim 2, wherein performing a linear transformation on the sample extraction image to obtain the first, second, and third parameter matrices to be adjusted of the self-attention model comprises: q = W_q R, k = W_k R, v = W_v R, where R represents the sample extraction image, q represents the first parameter matrix to be adjusted, k represents the second parameter matrix to be adjusted, v represents the third parameter matrix to be adjusted, W_q represents a randomly initialized matrix corresponding to the first parameter matrix to be adjusted, W_k represents a randomly initialized matrix corresponding to the second parameter matrix to be adjusted, and W_v represents a randomly initialized matrix corresponding to the third parameter matrix to be adjusted.
- The method according to claim 2, wherein determining the similarity matrix corresponding to the sample extraction image based on the first parameter matrix to be adjusted and the second parameter matrix to be adjusted comprises: determining each pixel in the sample extraction image, one by one, as a target pixel; for each target pixel, calculating the pixel similarity between the target pixel and all pixels in the sample extraction image based on the first parameter matrix to be adjusted and the second parameter matrix to be adjusted; and constructing the similarity matrix corresponding to the sample extraction image based on the position of each target pixel in the sample extraction image and the respective pixel similarities.
- The method according to claim 5, wherein weighting the similarity matrix based on the third parameter matrix to be adjusted to obtain a weighted feature image comprises: normalizing the similarity matrix; and weighting the normalized similarity matrix based on the third parameter matrix to be adjusted to obtain the weighted feature image, specifically implemented based on the following calculation formula: A(q,k,v)_(i,j) = Σ_n Ω′_(i,n) · v_(n,j), where A(q,k,v)_(i,j) represents the weighted feature value in row i and column j of the weighted feature image A obtained from the matrices q, k, and v, v represents the third parameter matrix to be adjusted, H_0 represents the target output length of the sample feature map, W_0 represents the target output width of the sample feature map, Ω′ represents the normalized similarity matrix, Ω′_(i,n) represents the element in row i and column n of the normalized similarity matrix Ω′, and v_(n,j) represents the element in row n and column j of the third parameter matrix v to be adjusted.
- The method according to claim 2, wherein determining the output feature image based on at least two weighted feature images and the sample feature image comprises: fusing at least two weighted feature images to obtain a fused feature image; adjusting the feature dimension of the fused feature image to a target feature dimension, and adding the fused feature image adjusted to the target feature dimension to the sample extraction image to obtain a target dimension image; inputting the target dimension image into at least one fully connected layer of the self-attention model to obtain an output dimension image; and adjusting the output dimension image to the feature dimension of the fused feature image to obtain the output feature image.
- The method according to claim 7, wherein the self-attention model includes two fully connected layers, and inputting the target dimension image into at least one fully connected layer of the self-attention model to obtain the output dimension image comprises: S = conv(dense(dense(C′)) + C′), where S represents the output dimension image, dense represents a fully connected layer whose activation function is a rectified linear unit, and conv represents a convolutional layer used to unify the feature dimension.
- The method according to claim 1, further comprising: acquiring at least one target extraction image whose image features are to be extracted; and inputting the target extraction image into a pre-trained image extraction model and outputting a target feature image of the target extraction image.
- A training device for an image feature extraction model, comprising: a sample acquisition module configured to acquire multiple sets of training sample data, wherein the training sample data includes a sample extraction image and a sample feature image corresponding to the sample extraction image; and a model training module configured to train a pre-established self-attention model based on the multiple sets of training sample data to generate an image feature extraction model; wherein the self-attention model is used to learn the dependency between each pixel in the sample extraction image and all pixels in the image.
- An electronic device, comprising: one or more processors; and a storage device configured to store one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors implement the training method for an image feature extraction model according to any one of claims 1-9.
- A computer-readable storage medium on which a computer program is stored, wherein, when the computer program is executed by a processor, the training method for an image feature extraction model according to any one of claims 1-9 is implemented.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110560452.5A (granted as CN113326851B) (zh) | 2021-05-21 | 2021-05-21 | Image feature extraction method and apparatus, electronic device, and storage medium |
CN202110560452.5 | 2021-05-21 | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022242127A1 true WO2022242127A1 (zh) | 2022-11-24 |
Family
ID=77416335
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/137818 WO2022242127A1 (zh) | 2021-05-21 | 2021-12-14 | 图像特征提取方法、装置、电子设备及存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113326851B (zh) |
WO (1) | WO2022242127A1 (zh) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113326851B (zh) * | 2021-05-21 | 2023-10-27 | 中国科学院深圳先进技术研究院 | Image feature extraction method and apparatus, electronic device, and storage medium |
CN114913402B (zh) * | 2022-07-18 | 2022-10-18 | 深圳比特微电子科技有限公司 | Fusion method and device for deep learning models |
CN118051765B (zh) * | 2024-04-16 | 2024-07-05 | 天津光电通信技术有限公司 | Noise global feature extraction method and device, server, and storage medium |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107292887B (zh) * | 2017-06-20 | 2020-07-03 | 电子科技大学 | Retinal vessel segmentation method based on deep learning with adaptive weights |
CN109829894B (zh) * | 2019-01-09 | 2022-04-26 | 平安科技(深圳)有限公司 | Segmentation model training method, OCT image segmentation method, apparatus, device, and medium |
CN109872306B (zh) * | 2019-01-28 | 2021-01-08 | 腾讯科技(深圳)有限公司 | Medical image segmentation method, apparatus, and storage medium |
CN110378913B (zh) * | 2019-07-18 | 2023-04-11 | 深圳先进技术研究院 | Image segmentation method, apparatus, device, and storage medium |
WO2021031066A1 (zh) * | 2019-08-19 | 2021-02-25 | 中国科学院深圳先进技术研究院 | Cartilage image segmentation method and apparatus, readable storage medium, and terminal device |
CN110781956A (zh) * | 2019-10-24 | 2020-02-11 | 精硕科技(北京)股份有限公司 | Target detection method and apparatus, electronic device, and readable storage medium |
US11763433B2 (en) * | 2019-11-14 | 2023-09-19 | Samsung Electronics Co., Ltd. | Depth image generation method and device |
CN111242217A (zh) * | 2020-01-13 | 2020-06-05 | 支付宝实验室(新加坡)有限公司 | Training method and apparatus for an image recognition model, electronic device, and storage medium |
CN111429464B (zh) * | 2020-03-11 | 2023-04-25 | 深圳先进技术研究院 | Medical image segmentation method, medical image segmentation apparatus, and terminal device |
CN111612790B (zh) * | 2020-04-29 | 2023-10-17 | 杭州电子科技大学 | Medical image segmentation method based on a T-shaped attention structure |
CN111951281B (zh) * | 2020-08-10 | 2023-11-28 | 中国科学院深圳先进技术研究院 | Image segmentation method, apparatus, device, and storage medium |
CN111951280B (zh) * | 2020-08-10 | 2022-03-15 | 中国科学院深圳先进技术研究院 | Image segmentation method, apparatus, device, and storage medium |
CN112017191B (zh) * | 2020-08-12 | 2023-08-22 | 西北大学 | Liver pathology image segmentation model construction and segmentation method based on an attention mechanism |
CN112001931A (zh) * | 2020-08-24 | 2020-11-27 | 上海眼控科技股份有限公司 | Image segmentation method, apparatus, device, and storage medium |
CN112309540B (zh) * | 2020-10-28 | 2024-05-14 | 中国科学院深圳先进技术研究院 | Motion assessment method, apparatus, system, and storage medium |
CN112700462B (zh) * | 2020-12-31 | 2024-09-17 | 北京迈格威科技有限公司 | Image segmentation method and apparatus, electronic device, and storage medium |
CN112419321B (zh) * | 2021-01-25 | 2021-04-02 | 长沙理工大学 | X-ray image recognition method and apparatus, computer device, and storage medium |
CN112633419B (zh) * | 2021-03-09 | 2021-07-06 | 浙江宇视科技有限公司 | Few-shot learning method and apparatus, electronic device, and storage medium |
- 2021-05-21: CN application CN202110560452.5A granted as patent CN113326851B (active)
- 2021-12-14: PCT application PCT/CN2021/137818 filed as WO2022242127A1 (application filing)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102278756B1 (ko) * | 2020-03-11 | 2021-07-16 | 연세대학교 산학협력단 | Apparatus and method for upscaling stereo images considering consistency |
CN111667495A (zh) * | 2020-06-08 | 2020-09-15 | 北京环境特性研究所 | Image scene parsing method and apparatus |
CN113159056A (zh) * | 2021-05-21 | 2021-07-23 | 中国科学院深圳先进技术研究院 | Image segmentation method, apparatus, device, and storage medium |
CN113326851A (zh) * | 2021-05-21 | 2021-08-31 | 中国科学院深圳先进技术研究院 | Image feature extraction method and apparatus, electronic device, and storage medium |
Non-Patent Citations (2)
Title |
---|
ANONYMOUS: "CV Attention: Non-Local Neural Networks", Understanding and implementation of Non-Local neural networks, 5 January 2020 (2020-01-05), pages 1-5, XP093009214, Retrieved from the Internet <URL:https://www.cnblogs.com/pprp/p/12153255.html> [retrieved on 2022-02-10] *
WANG, XIAOLONG ET AL.: "Non-local Neural Networks", CVPR 2018, 22 June 2018 (2018-06-22), pages 7794 - 7803, XP093000642, DOI: 10.1109/CVPR.2018.00813 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117094895A (zh) * | 2023-09-05 | 2023-11-21 | 杭州一隅千象科技有限公司 | Image panorama stitching method and system |
CN117094895B (zh) * | 2023-09-05 | 2024-03-26 | 杭州一隅千象科技有限公司 | Image panorama stitching method and system |
CN118168514A (zh) * | 2024-05-14 | 2024-06-11 | 南京苏测测绘科技有限公司 | Underwater cross-section surveying and mapping system and method with intelligent-algorithm imaging |
Also Published As
Publication number | Publication date |
---|---|
CN113326851B (zh) | 2023-10-27 |
CN113326851A (zh) | 2021-08-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022242127A1 (zh) | | Image feature extraction method and apparatus, electronic device, and storage medium |
WO2022242131A1 (zh) | | Image segmentation method, apparatus, device, and storage medium |
KR102663519B1 (ko) | | Cross-domain image translation technique |
CN111797893B (zh) | | Neural network training method, image classification system, and related device |
US12100192B2 (en) | | Method, apparatus, and electronic device for training place recognition model |
CN111046125A (zh) | | Visual positioning method and system, and computer-readable storage medium |
US11960570B2 (en) | | Learning contrastive representation for semantic correspondence |
WO2021098534A1 (zh) | | Similarity determination, network training, and search methods and apparatus, electronic apparatus, and storage medium |
JP2023549070A (ja) | | Face recognition from unseen domains via learning of semantic features |
CN111242952B (zh) | | Image segmentation model training method, image segmentation method, apparatus, and computing device |
CN116129141B (zh) | | Medical data processing method, apparatus, device, medium, and computer program product |
WO2021190433A1 (zh) | | Method and apparatus for updating an object recognition model |
WO2023109361A1 (zh) | | Method, system, device, medium, and product for video processing |
CN116434033A (zh) | | Cross-modal contrastive learning method and system for dense prediction tasks on RGB-D images |
CN116597260A (zh) | | Image processing method, electronic device, storage medium, and computer program product |
Kalash et al. | | Relative saliency and ranking: Models, metrics, data and benchmarks |
Qin et al. | | Depth estimation by parameter transfer with a lightweight model for single still images |
Luo et al. | | A Review of Homography Estimation: Advances and Challenges |
Wang et al. | | Swimmer's posture recognition and correction method based on embedded depth image skeleton tracking |
Xie et al. | | Visual robot relocalization based on multi-task CNN and image-similarity strategy |
CN111915676B (zh) | | Image generation method and apparatus, computer device, and storage medium |
CN111582449B (zh) | | Training method, apparatus, device, and storage medium for a target-domain detection network |
US20240169567A1 (en) | | Depth edges refinement for sparsely supervised monocular depth estimation |
US20240054394A1 (en) | | Generating new data based on class-specific uncertainty information using machine learning |
CN112862840B (zh) | | Image segmentation method, apparatus, device, and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21940570; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 21940570; Country of ref document: EP; Kind code of ref document: A1 |