CN116188392A - Image processing method, computer-readable storage medium, and computer terminal - Google Patents
- Publication number
- CN116188392A (application number CN202211731783.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- processed
- features
- attention
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/762—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/94—Hardware or software architectures specially adapted for image or video understanding
- G06V10/95—Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/04—Indexing scheme for image data processing or generation, in general involving 3D image data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Abstract
The application discloses an image processing method, a computer-readable storage medium, and a computer terminal, which can be used in the fields of image recognition and image segmentation. The method comprises the following steps: acquiring an image to be processed, wherein the image to be processed comprises part images of at least one organ of a biological object; extracting features of the image to be processed to obtain a first image feature of the part image; performing cross attention processing on the first image feature and a plurality of query vectors to obtain a plurality of attention features, wherein different query vectors are used for representing feature categories of different pixel points in the part image; and identifying the image to be processed based on the first image feature and the plurality of attention features to obtain a target identification result of the image to be processed, wherein the target identification result is used for representing the probability that the pixel points in the image to be processed meet a preset condition. The method and the device solve the technical problem of low image processing accuracy in the related art.
Description
Technical Field
The present invention relates to the field of image processing, and in particular, to an image processing method, a computer-readable storage medium, and a computer terminal.
Background
Currently, in some fields, images follow a long-tail distribution; that is, an image may contain extremely complex long-tail objects, where a long-tail object represents a type of pixel that is rare in the images. In some image recognition tasks, these rare pixel types are precisely the ones that need to be studied. When current image processing methods process images with a long-tail distribution, an out-of-distribution situation arises: recognition performance on out-of-distribution long-tail objects drops significantly, which limits the image processing effect in real-world applications, and thus the accuracy of image processing is low.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiment of the application provides an image processing method, a computer readable storage medium and a computer terminal, which are used for at least solving the technical problem of lower accuracy of image processing in the related art.
According to an aspect of an embodiment of the present application, there is provided an image processing method including: acquiring an image to be processed, wherein the image to be processed comprises part images of at least one organ of a biological object; extracting features of the image to be processed to obtain first image features of the part image; cross attention processing is carried out on the first image feature and a plurality of query vectors to obtain a plurality of attention features, wherein different query vectors are used for representing feature categories of different pixel points in the part image; based on the first image feature and the plurality of attention features, identifying the image to be processed to obtain a target identification result of the image to be processed, wherein the target identification result is used for representing the probability that the pixel points in the image to be processed meet the preset condition.
According to another aspect of the embodiments of the present application, there is also provided an image processing method, including: displaying an image to be processed on the operation interface in response to an input instruction acting on the operation interface, wherein the image to be processed contains an image of a region of at least one organ of the biological object; and responding to an image processing instruction acting on an operation interface, and displaying a target recognition result of the image to be processed on the operation interface, wherein the target recognition result is used for representing the probability that pixel points in the image to be processed meet a preset condition, the target recognition result is obtained by recognizing the image to be processed based on a first image feature and a plurality of attention features of the part image, the plurality of attention features are obtained by carrying out cross attention processing on the first image feature and a plurality of query vectors, different query vectors are used for representing feature types of different pixel points in the part image, and the first image feature is obtained by carrying out feature extraction on the image to be processed.
According to another aspect of the embodiments of the present application, there is also provided an image processing method, including: displaying an image to be processed on a presentation screen of a Virtual Reality (VR) device or an Augmented Reality (AR) device, wherein the image to be processed contains part images of at least one organ of a biological object; extracting features of the image to be processed to obtain first image features of the part image; cross attention processing is carried out on the first image feature and a plurality of query vectors to obtain a plurality of attention features, wherein different query vectors are used for representing feature categories of different pixel points in the part image; identifying an image to be processed based on the first image feature and the plurality of attention features to obtain a target identification result of the image to be processed, wherein the target identification result is used for representing the probability that pixel points in the image to be processed meet preset conditions; and driving the VR device or the AR device to render the target recognition result.
According to another aspect of the embodiments of the present application, there is also provided an image processing method, including: acquiring an image to be processed by calling a first interface, wherein the first interface comprises a first parameter, the parameter value of the first parameter is the image to be processed, and the image to be processed comprises part images of at least one organ of a biological object; extracting features of the image to be processed to obtain first image features of the part image; cross attention processing is carried out on the first image feature and a plurality of query vectors to obtain a plurality of attention features, wherein different query vectors are used for representing feature categories of different pixel points in the part image; identifying an image to be processed based on the first image feature and the plurality of attention features to obtain a target identification result of the image to be processed, wherein the target identification result is used for representing the probability that pixel points in the image to be processed meet preset conditions; and outputting a target identification result by calling a second interface, wherein the second interface comprises a second parameter, and the parameter value of the second parameter is the target identification result.
According to another aspect of the embodiments of the present application, there is also provided an image processing apparatus including: the device comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring an image to be processed, wherein the image to be processed contains part images of at least one organ of a biological object; the extraction module is used for extracting the characteristics of the image to be processed to obtain the first image characteristics of the part image; the processing module is used for carrying out cross attention processing on the first image feature and a plurality of query vectors to obtain a plurality of attention features, wherein different query vectors are used for representing feature categories of different pixel points in the part image; the recognition module is used for recognizing the image to be processed based on the first image feature and the plurality of attention features to obtain a target recognition result of the image to be processed, wherein the target recognition result is used for representing the probability that the pixel points in the image to be processed meet the preset condition.
According to another aspect of the embodiments of the present application, there is also provided an image processing apparatus including: the first display module is used for responding to an input instruction acted on the operation interface and displaying an image to be processed on the operation interface, wherein the image to be processed comprises part images of at least one organ of the biological object; the second display module is used for responding to an image processing instruction acting on the operation interface, displaying a target identification result of the image to be processed on the operation interface, wherein the target identification result is used for representing the probability that pixel points in the image to be processed meet a preset condition, the target identification result is obtained by identifying the image to be processed based on a first image feature and a plurality of attention features of the part image, the attention features are obtained by carrying out cross attention processing on the first image feature and a plurality of query vectors, different query vectors are used for representing feature types of different pixel points in the part image, and the first image feature is obtained by carrying out feature extraction on the image to be processed.
According to another aspect of the embodiments of the present application, there is also provided an image processing apparatus including: the display module is used for displaying an image to be processed on a display picture of the virtual reality VR device or the augmented reality AR device, wherein the image to be processed contains part images of at least one organ of the biological object; the extraction module is used for extracting the characteristics of the image to be processed to obtain the first image characteristics of the part image; the processing module is used for carrying out cross attention processing on the first image feature and a plurality of query vectors to obtain a plurality of attention features, wherein different query vectors are used for representing feature categories of different pixel points in the part image; the recognition module is used for recognizing the image to be processed based on the first image feature and the plurality of attention features to obtain a target recognition result of the image to be processed, wherein the target recognition result is used for representing the probability that the pixel points in the image to be processed meet the preset condition; and the driving module is used for driving the VR equipment or the AR equipment to render the target recognition result.
According to another aspect of the embodiments of the present application, there is also provided an image processing apparatus including: the device comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring an image to be processed by calling a first interface, the first interface comprises a first parameter, the parameter value of the first parameter is the image to be processed, and the image to be processed contains part images of at least one organ of a biological object; the extraction module is used for extracting the characteristics of the image to be processed to obtain the first image characteristics of the part image; the processing module is used for carrying out cross attention processing on the first image feature and a plurality of query vectors to obtain a plurality of attention features, wherein different query vectors are used for representing feature categories of different pixel points in the part image; the recognition module is used for recognizing the image to be processed based on the first image feature and the plurality of attention features to obtain a target recognition result of the image to be processed, wherein the target recognition result is used for representing the probability that the pixel points in the image to be processed meet the preset condition; and the output module is used for outputting a target recognition result by calling a second interface, wherein the second interface comprises a second parameter, and the parameter value of the second parameter is the target recognition result.
According to another aspect of the embodiments of the present application, there is also provided a computer-readable storage medium, where the computer-readable storage medium includes a stored program, and when the program runs, the device on which the computer-readable storage medium is located is controlled to execute any one of the methods described above.
According to another aspect of the embodiments of the present application, there is also provided a computer terminal, including: a processor; and a memory, connected to the processor, for providing the processor with instructions for executing any one of the methods described above.
In the embodiments of the present application, an image to be processed is acquired, wherein the image to be processed comprises part images of at least one organ of a biological object; features of the image to be processed are extracted to obtain a first image feature of the part image; cross attention processing is performed on the first image feature and a plurality of query vectors to obtain a plurality of attention features, wherein different query vectors are used for representing feature categories of different pixel points in the part image; and the image to be processed is identified based on the first image feature and the plurality of attention features to obtain a target identification result of the image to be processed, wherein the target identification result is used for representing the probability that the pixel points in the image to be processed meet a preset condition, so that the processing accuracy of the image to be processed is improved. It is easy to note that, because the image to be processed can be identified based on the first image feature and the plurality of attention features, out-of-distribution objects in the image to be processed can be located and abnormal conditions in the image to be processed can be accurately determined, so that the processing accuracy of the image to be processed is improved and the technical problem of low image processing accuracy in the related art is solved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
fig. 1 is a schematic diagram of a hardware environment of a virtual reality device for an image processing method according to an embodiment of the present application;
FIG. 2 is a block diagram of a computing environment for an image processing method according to an embodiment of the present application;
fig. 3 is a flowchart of an image processing method according to embodiment 1 of the present application;
FIG. 4 is a schematic diagram of a target recognition result according to an embodiment of the present application;
FIG. 5 is a schematic illustration of an image processing process according to an embodiment of the present application;
fig. 6 is a flowchart of an image processing method according to embodiment 2 of the present application;
fig. 7 is a flowchart of an image processing method according to embodiment 3 of the present application;
fig. 8 is a flowchart of an image processing method according to embodiment 4 of the present application;
fig. 9 is a schematic diagram of an image processing apparatus according to embodiment 5 of the present application;
fig. 10 is a schematic view of an image processing apparatus according to embodiment 6 of the present application;
Fig. 11 is a schematic view of an image processing apparatus according to embodiment 7 of the present application;
fig. 12 is a schematic view of an image processing apparatus according to embodiment 8 of the present application;
fig. 13 is a block diagram of a computer terminal according to an embodiment of the present application.
Detailed Description
In order to make the solution of the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by one of ordinary skill in the art based on the embodiments herein without making any inventive effort shall fall within the scope of protection of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
According to an embodiment of the present application, there is also provided an image processing method, it being noted that the steps shown in the flowchart of the drawings may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from that herein.
Fig. 1 is a schematic diagram of a hardware environment of a virtual reality device for an image processing method according to an embodiment of the present application. As shown in fig. 1, the virtual reality device 104 is connected to the terminal 106, and the terminal 106 is connected to the server 102 via a network. The form of the virtual reality device 104 is not limited; the terminal 106 is not limited to a PC, a mobile phone, a tablet computer, etc. The server 102 may be a server corresponding to a media file operator, and the network includes, but is not limited to: a wide area network, a metropolitan area network, or a local area network.
Optionally, the virtual reality device 104 of this embodiment includes: memory, processor, and transmission means. The memory is used to store an application program that can be used to perform: acquiring an image to be processed, wherein the image to be processed comprises part images of at least one organ of a biological object; extracting features of the image to be processed to obtain first image features of the part image; cross attention processing is carried out on the first image feature and a plurality of query vectors to obtain a plurality of attention features, wherein different query vectors are used for representing feature categories of different pixel points in the part image; based on the first image feature and the plurality of attention features, the image to be processed is identified, and a target identification result of the image to be processed is obtained, wherein the target identification result is used for representing the probability that pixel points in the image to be processed meet preset conditions, so that the technical problem of lower accuracy of image processing in related technologies is solved.
The terminal of this embodiment may be configured to display an image to be processed on a presentation screen of a Virtual Reality (VR) device or an Augmented Reality (AR) device, wherein the image to be processed contains part images of at least one organ of a biological object; extract features of the image to be processed to obtain a first image feature of the part image; perform cross attention processing on the first image feature and a plurality of query vectors to obtain a plurality of attention features, wherein different query vectors are used for representing feature categories of different pixel points in the part image; identify the image to be processed based on the first image feature and the plurality of attention features to obtain a target identification result of the image to be processed, wherein the target identification result is used for representing the probability that pixel points in the image to be processed meet a preset condition; drive the VR device or the AR device to render the target recognition result; and send the image to be recognized to the virtual reality device 104, which displays the image to be recognized at the target delivery position after receiving it.
Optionally, the HMD (Head Mounted Display) and the eye tracking module of the virtual reality device 104 of this embodiment have the same functions as in the above embodiment; that is, the screen in the HMD is used for displaying a real-time picture, and the eye tracking module in the HMD is used for acquiring the real-time motion track of the user's eyeballs. The terminal of this embodiment obtains the position information and motion information of the user in the real three-dimensional space through a tracking system, and calculates the three-dimensional coordinates of the user's head in the virtual three-dimensional space as well as the user's visual field orientation in the virtual three-dimensional space.
The hardware architecture block diagram shown in fig. 1 may be used not only as an exemplary block diagram for the AR/VR device (or mobile device) described above, but also as an exemplary block diagram for the server described above. In an alternative embodiment, fig. 2 shows, in block diagram form, one embodiment that uses the AR/VR device (or mobile device) of fig. 1 as a computing node in a computing environment 201. Fig. 2 is a block diagram of a computing environment for an image processing method according to an embodiment of the present application. As shown in fig. 2, the computing environment 201 includes a plurality of computing nodes (e.g., servers) running on a distributed network (shown as 210-1, 210-2, …). Each computing node contains local processing and memory resources, and an end user 202 may run applications or store data remotely in the computing environment 201. An application may be provided as a plurality of services 220-1, 220-2, 220-3 and 220-4 in the computing environment 201, representing services "A", "D", "E", and "H", respectively.
Services 220 are provided or deployed in accordance with various virtualization techniques supported by the computing environment 201. In some embodiments, the services 220 may be provided according to virtual machine (VM) based virtualization, container based virtualization, and/or the like. With virtual machine based virtualization, a real computer is emulated by initializing a virtual machine, and programs and applications are executed without directly touching any real hardware resources. While a virtual machine virtualizes the machine, with container based virtualization a container can be started to virtualize at the level of the entire operating system (OS), so that multiple workloads can run on a single operating system instance.
In one embodiment based on container virtualization, several containers of a service 220 may be assembled into one Pod (e.g., a Kubernetes Pod). For example, as shown in FIG. 2, the service 220-2 may be equipped with one or more Pods 240-1, 240-2, …, 240-N (collectively Pod 240). Each Pod 240 may include an agent 245 and one or more containers 242-1, 242-2, …, 242-M (collectively containers 242). One or more containers 242 in a Pod 240 handle requests related to one or more corresponding functions of the service, and the agent 245 generally controls network functions related to the service, such as routing, load balancing, and the like. Other services 220 may also be equipped with Pods similar to Pod 240.
In operation, executing a user request from end user 202 may require invoking one or more services 220 in computing environment 201, and executing one or more functions of one service 220 may require invoking one or more functions of another service 220. As shown in FIG. 2, service "A"220-1 receives a user request of end user 202 from ingress gateway 230, service "A"220-1 may invoke service "D"220-2, and service "D"220-2 may request service "E"220-3 to perform one or more functions.
The computing environment may be a cloud computing environment, and the allocation of resources is managed by a cloud service provider, allowing the development of functions without considering the implementation, adjustment or expansion of the server. The computing environment allows developers to execute code that responds to events without building or maintaining a complex infrastructure. Instead of expanding a single hardware device to handle the potential load, the service may be partitioned to a set of functions that can be automatically scaled independently.
In the above-described operating environment, the present application provides an image processing method as shown in fig. 3. It should be noted that, the image processing method of this embodiment may be performed by the mobile terminal of the embodiment shown in fig. 1. Fig. 3 is a flowchart of an image processing method according to embodiment 1 of the present application. As shown in fig. 3, the method may include the steps of:
Step S302, a to-be-processed image is acquired.
Wherein the image to be processed comprises an image of a region of at least one organ of the biological object.
The image to be processed may be an image in which abnormal conditions or local details need attention, i.e., an image belonging to a long-tail (class-imbalanced) distribution; rare objects generally appear in long-tail distributed images, which makes them difficult to recognize. In related fields, the image to be processed may be, for example, a CT (computed tomography) scan image.
The biological object may be a human, animal or other organ-containing object. The above-mentioned organ may be an organ in the living body or an organ outside the living body, which is not limited to a specific type of organ. The site image may be an image focused on an organ in the biological object, may include one organ requiring image processing, and may include a plurality of organs requiring image processing, and is not limited herein.
In an alternative embodiment, the image of the part of at least one organ in the biological object may be acquired by the imaging device, and the image of the part of at least one organ in the biological object may also be acquired from the network, so as to obtain the image to be processed, and the specific manner of acquiring the image to be processed may be determined according to the actual situation.
Step S304, extracting features of the image to be processed to obtain first image features of the part image.
In an alternative embodiment, feature extraction may be performed on the image to be processed to obtain a first image feature for the image of the organ part in the biological object, where the first image feature is used to represent features of the organ in the biological object. For the other areas of the image to be processed outside the area where the part image is located, no corresponding image features need to be obtained. It should be noted that the first image feature here is a pixel-by-pixel (pixel-wise) feature.
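As a non-limiting illustration of this feature-extraction step, the following sketch assumes a small 3D convolutional backbone implemented in PyTorch; the layer sizes and shapes are assumptions made for illustration and are not the network disclosed in this application.

```python
# Minimal sketch of pixel-by-pixel feature extraction for a 3D volume,
# assuming a small convolutional backbone (illustrative layer sizes only).
import torch
import torch.nn as nn

class Backbone3D(nn.Module):
    def __init__(self, in_channels: int = 1, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(32, feat_dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W, D) image to be processed
        # returns (B, feat_dim, H, W, D) per-voxel first image features
        return self.net(x)

if __name__ == "__main__":
    image = torch.randn(1, 1, 32, 32, 16)   # e.g. a small CT sub-volume
    features = Backbone3D()(image)          # first image features of the part image
    print(features.shape)                   # torch.Size([1, 64, 32, 32, 16])
```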
Step S306, performing cross attention processing on the first image feature and a plurality of query vectors to obtain a plurality of attention features, wherein different query vectors are used for representing feature categories of different pixel points in the part image.
The above-mentioned plurality of query vectors (object queries) may be feature vectors set in advance for the feature categories of different pixels; they can serve as cluster centers for distinguishing feature categories and for guiding the classification of the first image feature. A query vector may be a feature vector characterizing the background in the part image or a feature vector characterizing an organ in the part image. It should be noted that the plurality of query vectors are not fixed, but are feature vectors that can be continuously learned and adjusted.
In an alternative embodiment, the cross attention processing can be performed on the first image feature and the plurality of query vectors by using a decoder module in a Transformer model, so that feature vectors capable of characterizing the context information of the part image (i.e., the plurality of attention features described above) can be obtained effectively and efficiently. Optionally, for each pixel point, the context information of all pixels on the cross path of that pixel can be collected through cross attention, and through a further recurrent operation, each pixel can finally obtain the dependency relationship of the whole image.
Through the cross attention processing of the first image feature and the plurality of query vectors, the plurality of attention features that need attention can be obtained while still taking the global context into account, so that the amount of computation for subsequent image recognition is reduced; moreover, classifying the first image feature through the query vectors makes the subsequent image recognition more accurate.
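As a non-limiting sketch of the cross attention step, the example below (PyTorch, with illustrative dimensions) lets N learnable query vectors attend to the flattened pixel features to produce the plurality of attention features; a real Mask-Transformer decoder additionally uses normalization layers, feed-forward blocks and several stacked stages.

```python
# Minimal sketch of cross attention between learnable query vectors and the
# first image features; dimensions and the single-head design are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QueryCrossAttention(nn.Module):
    def __init__(self, feat_dim: int = 64, num_queries: int = 16):
        super().__init__()
        # learnable query vectors, one per feature category (background, organs, ...)
        self.queries = nn.Parameter(torch.randn(num_queries, feat_dim))
        self.to_q = nn.Linear(feat_dim, feat_dim)
        self.to_k = nn.Linear(feat_dim, feat_dim)
        self.to_v = nn.Linear(feat_dim, feat_dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W, D) -> pixel sequence (B, HWD, C)
        b, c = feats.shape[:2]
        pix = feats.flatten(2).transpose(1, 2)
        q = self.to_q(self.queries).unsqueeze(0).expand(b, -1, -1)   # (B, N, C)
        k, v = self.to_k(pix), self.to_v(pix)                        # (B, HWD, C)
        attn = F.softmax(q @ k.transpose(1, 2) / c ** 0.5, dim=-1)   # (B, N, HWD)
        return attn @ v   # (B, N, C): the plurality of attention features

if __name__ == "__main__":
    feats = torch.randn(1, 64, 8, 8, 8)
    attention_features = QueryCrossAttention()(feats)
    print(attention_features.shape)   # torch.Size([1, 16, 64])
```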
Step S308, based on the first image feature and the plurality of attention features, the image to be processed is identified, and a target identification result of the image to be processed is obtained.
The target recognition result is used for representing the probability that the pixel points in the image to be processed meet the preset condition.
The target recognition result is used for representing abnormal distribution conditions of pixels in the image to be processed, and an abnormal region can be obtained through positioning through the abnormal distribution conditions, namely, less classified objects can be positioned through an abnormal distribution diagram. Alternatively, the distribution of the abnormal area of the pixel may be represented by an abnormal score or an abnormal distribution map, and may be represented by other means, which is not limited herein.
The preset condition is used for representing the pixel points containing the long-tail object in the image to be processed, and the probability of containing the long-tail object in the image to be processed can be determined through a plurality of similarities between the first image characteristic and a plurality of attention characteristics. The probability of containing a long-tail object is larger if the maximum similarity among the plurality of similarities is smaller, and the probability of containing a long-tail object is smaller if the maximum similarity among the plurality of similarities is larger.
In an alternative embodiment, a region of a pixel point with low feature similarity can be located according to a first image feature and an attention feature corresponding to each pixel point in the image to be processed, so that an abnormal pixel point distribution condition of the image to be processed is obtained, a long tail object in the image to be processed is represented through the abnormal pixel point distribution condition, missing of the long tail object in the image to be processed can be avoided, and processing accuracy of the image to be processed can be improved.
Acquiring an image to be processed, wherein the image to be processed comprises part images of at least one organ of a biological object; extracting features of the image to be processed to obtain first image features of the part image; cross attention processing is carried out on the first image feature and a plurality of query vectors to obtain a plurality of attention features, wherein different query vectors are used for representing feature categories of different pixel points in the part image; based on the first image feature and the plurality of attention features, the image to be processed is identified, and a target identification result of the image to be processed is obtained, wherein the target identification result is used for representing the probability that pixel points in the image to be processed meet preset conditions, and the processing accuracy of the image to be processed is improved. It is easy to note that the image to be processed can be identified based on the first image feature and the plurality of query vectors, so that the object exceeding the distribution in the image to be processed is positioned, the abnormal condition in the image to be processed is accurately determined, the processing accuracy of the image to be processed is improved, and the technical problem that the accuracy of image processing in the related art is lower is solved.
In the above embodiment of the present application, based on the first image feature and the plurality of attention features, the identifying the image to be processed to obtain the target identifying result of the image to be processed includes: determining the similarity between the first image feature and the plurality of attention features to obtain a plurality of similarities; obtaining the maximum similarity in the multiple similarities; and obtaining the opposite number of the maximum similarity to obtain the target recognition result.
In an alternative embodiment, given an image, a pixel-level query response may be generated by a mask transformer (Mask Transformers), where the query response can be expressed as the correlation between the plurality of attention features (serving as cluster centers) and each pixel feature. The maximum query response of a pixel expresses the similarity between that pixel and its assigned cluster center, and the maximum query response of an outlier is relatively small, i.e., the similarity of the outlier to any cluster center is relatively small. Therefore, the opposite number of the maximum similarity can be employed as the pixel-level anomaly score, called MaxQuery, i.e.:

A = -max_N(R)

wherein R ∈ R^(N×HWD) represents the query response matrix, A ∈ R^(HWD) represents the anomaly score corresponding to the query response, and max_N represents the maximum operation over the query dimension N.
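As a non-limiting sketch of the MaxQuery-style anomaly score above, the example below (PyTorch) forms the query response R as the similarity between each pixel feature and each attention feature, takes the opposite number of the maximum response over the query dimension, and applies the min-max normalization mentioned below; the shapes and the dot-product similarity are assumptions made for illustration.

```python
# Minimal sketch of the pixel-level anomaly score A = -max_N(R), assuming the
# query response R is the dot-product similarity between pixel features and
# the N attention features (cluster centers); shapes are illustrative.
import torch
import torch.nn.functional as F

def anomaly_score(pixel_feats: torch.Tensor, attn_feats: torch.Tensor) -> torch.Tensor:
    # pixel_feats: (HWD, C) first image features; attn_feats: (N, C)
    R = pixel_feats @ attn_feats.t()          # (HWD, N) query response matrix
    A = -R.max(dim=1).values                  # pre-softmax MaxQuery score
    # post-softmax alternative discussed below: A' = -F.softmax(R, dim=1).max(dim=1).values
    # normalize the anomaly distribution to [0, 1] by min-max normalization
    return (A - A.min()) / (A.max() - A.min() + 1e-8)

if __name__ == "__main__":
    pixel_feats = torch.randn(4096, 64)       # flattened per-pixel features
    attn_feats = torch.randn(16, 64)          # the plurality of attention features
    print(anomaly_score(pixel_feats, attn_feats).shape)   # torch.Size([4096])
```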
Fig. 4 is a schematic diagram of a target recognition result according to an embodiment of the present application, in which three clusters are included. Three dots respectively represent the cluster centers of the different clusters, pixels inside the clusters are represented by squares, and pixels outside the clusters are represented by triangles. Dashed arrows represent the vector lengths from in-cluster pixels to the cluster centers, and solid arrows represent the vector lengths from out-of-cluster pixels to the cluster centers; these lengths reflect the distances between the pixels and the cluster centers. The maximum vector length of in-cluster pixels is generally smaller than that of out-of-cluster pixels, so that, in the above manner, the out-of-distribution pixels in the image to be processed can be obtained.
Out-of-distribution (OOD) detection and localization aim to detect out-of-distribution situations, i.e., outliers that are not seen in the training data. The maximum softmax probability (Maximal Softmax Probability, MSP) can be used as a strong baseline; subsequent methods have improved OOD detection in various respects and have also striven to localize OOD objects or regions in larger images, e.g., in urban driving scenarios. Although OOD detection and localization on natural images have advanced, their application to images of real scenes remains challenging: because of the subtle differences between foreground objects in real-world images of related fields, their OOD detection or localization is a typical near-OOD problem.
In an alternative embodiment, the anomaly distribution may be further normalized to [0,1] by min-max normalization. Fig. 4 illustrates the feasibility of the maximum query response for OOD pixel identification; the negative sign is added in this application because a pixel is less likely to be an OOD pixel when its maximum query response, i.e., its maximum similarity, is greater.
In addition, the anomaly scoring results can be compared in this application: the anomaly distribution obtained from the maximum value of the query response R (pre-softmax, A = -max_N(R)) can be compared with the one obtained from the cluster assignment M (post-softmax, A' = -max_N(M)). The accuracy of the result of A is much better than that of A', because a pixel within the distribution of a cluster can be similarly close to multiple cluster centers, so its maximum score in M can be very low and it is easily misclassified as an outlier pixel, whereas its maximum query response in R is still high enough to distinguish it from outlier pixels; therefore, the maximum query response is selected to represent the abnormal region. As for the cluster assignment, although misclassification may occur, it can still be used to semantically segment the pixels as required.
In the above embodiment of the present application, the method further includes: determining target types corresponding to the plurality of attention features; determining the similarity between the first image feature and the plurality of attention features to obtain a plurality of similarities; based on a plurality of similarities and target types, performing semantic segmentation on pixels in the image to be processed to obtain a target semantic segmentation result, wherein the target semantic segmentation result is used for representing the category of the pixel points belonging to the part image in the image to be processed.
The above-described object type may be determined based on the pixel class included in the actual processing task.
In an alternative embodiment, the target types corresponding to the attention features may be determined, and the first image features with larger similarity to the attention features in the multiple similarities may be classified by the target types so as to obtain the category of the pixel corresponding to the first image feature, so that the semantic segmentation of the pixel in the image to be processed is realized, and the target semantic segmentation result is obtained.
In the above embodiment of the present application, based on a plurality of similarities and target types, performing semantic segmentation on pixels in an image to be processed to obtain a semantic segmentation result, including: grouping pixels in an image to be processed based on a plurality of similarities to obtain a first pixel set; and classifying the first pixel set based on the target type to obtain a semantic segmentation result.
In an alternative embodiment, the pixels in the image to be processed may be grouped according to a plurality of similarities to obtain a first pixel set, so that the objects represented by the pixels in different categories are represented by the group form, and the category may be assigned to the first pixel set where the pixel point with the greater similarity is located by the target type corresponding to the plurality of attention features, so as to obtain a semantic segmentation result, that is, obtain the category of the organ in the part image in the image to be processed.
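As a non-limiting sketch of this grouping-and-classification step, the example below assigns each pixel to the query with the highest similarity (the first pixel set) and then maps each query group to a target type; the query-to-class mapping used here is an illustrative assumption.

```python
# Minimal sketch of semantic segmentation from query responses: group pixels by
# their most similar query, then assign each group the target type of its query.
import torch

def semantic_segmentation(R: torch.Tensor, query_to_class: torch.Tensor) -> torch.Tensor:
    # R: (HWD, N) similarities between pixel features and the N attention features
    # query_to_class: (N,) target type associated with each query vector
    group = R.argmax(dim=1)          # first pixel set: each pixel -> a query group
    return query_to_class[group]     # class label for every pixel

if __name__ == "__main__":
    R = torch.randn(4096, 16)
    # illustrative mapping: queries 0-7 -> background (0), 8-13 -> organ (1), 14-15 -> abnormal (2)
    mapping = torch.tensor([0] * 8 + [1] * 6 + [2] * 2)
    labels = semantic_segmentation(R, mapping)
    print(labels.shape, labels.unique())
```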
In the above embodiment of the present application, performing cross attention processing on a first image feature and a plurality of query vectors to obtain a plurality of attention features, including: and performing cross attention processing on the first image feature and the plurality of query vectors by using the decoder model to obtain a plurality of attention features.
The decoder model may be a Mask transformer, where the Mask transformer is mainly configured to cluster the first image features through a plurality of query vectors, so as to obtain a clustering result, so that the plurality of query vectors are updated according to the clustering result, and a plurality of attention features are obtained.
In the above embodiment of the present application, performing cross attention processing on the first image feature and the plurality of query vectors by using the decoder model to obtain a plurality of attention features, including: clustering the first image features based on a plurality of query vectors to obtain a clustering result; and updating the plurality of query vectors based on the clustering result to obtain a plurality of attention features.
In an alternative embodiment, the first image features may be guided to be clustered by a plurality of query vectors, so as to obtain a clustered result, and the categories guided by the plurality of query vectors are activated according to the clustered result, so as to obtain a plurality of attention features.
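As a non-limiting sketch of this clustering-and-update step, the example below performs one k-means-like update: pixels are softly assigned to the query vectors (the clustering result) and each query is moved toward the weighted mean feature of its assigned pixels. The exact update rule of the disclosed decoder may differ; this is only an illustration of the idea.

```python
# Minimal sketch of one cluster-style update of the query vectors based on the
# clustering result of the first image features (k-means-like, illustrative).
import torch
import torch.nn.functional as F

def update_queries(pixel_feats: torch.Tensor, queries: torch.Tensor) -> torch.Tensor:
    # pixel_feats: (HWD, C) first image features; queries: (N, C) query vectors
    logits = pixel_feats @ queries.t()                    # (HWD, N) clustering result
    assign = F.softmax(logits, dim=1)                     # soft cluster assignment
    weights = assign / (assign.sum(dim=0, keepdim=True) + 1e-8)
    return weights.t() @ pixel_feats                      # (N, C) updated attention features

if __name__ == "__main__":
    pixel_feats, queries = torch.randn(4096, 64), torch.randn(16, 64)
    print(update_queries(pixel_feats, queries).shape)     # torch.Size([16, 64])
```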
In the above embodiment of the present application, the method further includes: obtaining a training sample, wherein the training sample comprises: a sample image and a preset semantic segmentation result, wherein the sample image comprises a preset image of a preset organ of a preset biological object; extracting features of the sample image to obtain second image features of the preset image; performing cross attention processing on the second image feature and the plurality of query vectors by using the decoder model to obtain a plurality of sample attention features; based on the second image feature and the plurality of sample attention features, identifying the sample image to obtain a sample processing result of the sample image, wherein the sample processing result comprises: the sample semantic segmentation result is used for representing pixel points belonging to a preset image in the sample image, and the pixel sets in the second pixel set respectively comprise pixel points of different preset types in the sample image; determining a total loss value of the decoder model based on the sample semantic segmentation result, the preset semantic segmentation result and the second pixel set; model parameters of the decoder model are adjusted based on the total loss value.
The training samples may be images belonging to the same field as the image to be processed, or may be images of other fields, which is not limited herein.
In an alternative embodiment, the training sample is processed to obtain a sample semantic segmentation result and a second pixel set, where the sample semantic segmentation result is used to represent a distribution situation of abnormal pixel points in the training sample, and the second pixel set is used to represent a grouping situation of pixels in the image to be processed.
At present, the conventional segmentation loss is an important learning target of the model, and the cross-entropy loss between the output value and the ground truth can be used for model training. However, when only the conventional segmentation loss is used, the object queries mainly focus on the background and the organs rather than the abnormal region: the significant difference between foreground and background greatly disperses the model's attention away from the subtle differences between OOD objects and in-distribution objects, and some queries have mixed representations of background and foreground, which is undesirable for discriminative cluster learning. Therefore, the present application proposes a query distribution loss to manipulate the object queries, guide them to focus on the foreground, especially the abnormal region, and encourage concentrated cluster learning. The key step is to supervise the grouping of the clusters with the ground-truth clusters.
Specifically, the N query channels can be divided into three groups of sizes N_1, N_2 and N_3, which are used to represent the background, the organs and the abnormal regions (e.g., tumor regions), respectively. In this application, the first N_1 channels of M can be associated with the background classes G_1, the next N_2 channels can be associated with the organ classes G_2, and finally the N_3 channels can be associated with the abnormal regions G_3. The pixel sets M̂ and the classification labels Ĝ are combined as follows:

M̂_k = Σ_{n ∈ N_k} M_n, Ĝ_k = Σ_{i ∈ G_k} G_i, k ∈ {1, 2, 3}

wherein the combined M̂ represents, at each spatial position, the probability distribution over the three categories of background, organ and abnormal region, i.e. M̂ ∈ R^(3×HWD).

The query distribution loss can be expressed as the negative log-likelihood loss between M̂ and Ĝ, specifically as follows:

L_query = - Σ_j Σ_{k=1}^{3} Ĝ_{k,j} log M̂_{k,j}
The segmentation loss function can be constructed according to the sample semantic segmentation result and the preset semantic segmentation result, so that the segmentation capability of the decoder model is improved through the segmentation loss function.
The query distribution loss is mainly used to assign clusters to different types of pixels so as to establish a strict boundary between them. Based on this, the final loss function is a weighted combination of the segmentation loss function and the query distribution loss function, specifically expressed as follows:

L_total = L_seg + λ · L_query

wherein L_total represents the final loss function, L_seg represents the segmentation loss function, L_query represents the query distribution loss function, and the weight λ can be adjusted according to actual requirements.
In the above embodiment of the present application, based on the second image feature and the plurality of sample attention features, identifying the sample image to obtain the second pixel set includes: determining the similarity between the second image feature and the plurality of sample attention features to obtain a plurality of sample similarities; grouping pixels in the sample image based on the plurality of sample similarities to obtain an initial pixel set; and merging the initial pixel sets based on the preset category to obtain a second pixel set.
The above-mentioned preset categories may be categories requiring attention, and may be classified into, for example, a background, an organ, and an abnormal region.
In an alternative embodiment, the similarity between the second image feature and the plurality of sample attention features may be determined to obtain a plurality of sample similarities; pixels in the sample image with high similarity to the same attention feature may be grouped according to the plurality of sample similarities to obtain an initial pixel set; and the initial pixel sets may be merged according to the type of the object to be detected to obtain a second pixel set representing the different types.
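Under the assumptions below, the grouping and merging of sample pixels could look like the following sketch: each pixel joins the group of its most similar attention feature, and the initial groups are then merged according to an assumed mapping from query index to preset category.

```python
import torch

def group_and_merge(pixel_feats, attn_feats, query_to_category):
    """pixel_feats: (HWD, C) second image features per pixel.
    attn_feats:    (N, C) sample attention features (one per query).
    query_to_category: (N,) assumed mapping from query index to preset category
                       (e.g. 0=background, 1=organ, 2=abnormal region)."""
    # Sample similarities between each pixel and each attention feature
    sims = pixel_feats @ attn_feats.t()                 # (HWD, N)
    # Initial pixel set: each pixel joins the group of its most similar query
    initial_groups = sims.argmax(dim=1)                 # (HWD,)
    # Second pixel set: merge initial groups that share the same preset category
    merged_groups = query_to_category[initial_groups]   # (HWD,)
    return initial_groups, merged_groups
```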
In the above embodiment of the present application, determining the total loss value of the decoder model based on the sample semantic segmentation result, the preset semantic segmentation result and the second pixel set includes: determining a first loss value of the decoder model based on the sample semantic segmentation result and a preset semantic segmentation result; combining the preset semantic segmentation results based on preset categories corresponding to the preset semantic segmentation results to obtain a preset pixel set; determining a second loss value of the decoder model based on the second set of pixels and the preset set of pixels; and obtaining a weighted sum of the first loss value and the second loss value to obtain a total loss value.
The first loss value described above is used to represent the loss of the decoder model to the abnormal region in the sample image.
The second loss value described above is used to represent the loss of the decoder model for object recognition.
In an alternative embodiment, for tasks that are primarily concerned with abnormal regions, the ability to identify an abnormal region may be improved by increasing the weight of the first loss value or decreasing the weight of the second loss value; for tasks that are primarily concerned with the class of objects, the ability to identify the class of pixels in an image may be improved by decreasing the weight of the first loss value or increasing the weight of the second loss value.
In the related art, image segmentation aims at segmenting an image into a plurality of regions corresponding to objects of interest. Attention can be focused on a three-dimensional image $X \in \mathbb{R}^{H \times W \times D}$ of a certain field, which may be divided into K class labels $\{G_i\}_{i=1}^{K}$ using a segmentation model, wherein $G_i \in \{0,1\}^{H \times W \times D}$ represents the true mask of the i-th class. In this problem, class 1 refers to the background, class 2 refers to a specific organ, and the others are abnormal parts, such as tumors. Since images in real scenes are long-tail distributed over the dataset, the segmentation task should be classified as supervised pixel segmentation or pixel-level OOD localization.
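As a small illustration of this formulation, the ground-truth masks $G_i$ could be built from a per-voxel label volume as in the sketch below; the class count and volume size are assumed for the example.

```python
import torch
import torch.nn.functional as F

# Assumed example: convert a label volume (H, W, D) with K=3 classes into
# one-hot ground-truth masks G_i in {0,1}^(H x W x D).
labels = torch.randint(0, 3, (64, 64, 32))          # 0=background, 1=organ, 2=abnormal
gt_masks = F.one_hot(labels, num_classes=3)         # (H, W, D, K)
gt_masks = gt_masks.permute(3, 0, 1, 2).float()     # (K, H, W, D), one class per voxel
```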
FIG. 5 is a schematic diagram of an image processing process according to an embodiment of the present application. As shown in FIG. 5, a CNN (Convolutional Neural Network) backbone may be used to build a model that extracts the feature $P \in \mathbb{R}^{H \times W \times D}$ of each pixel, together with a transformer that incrementally updates a set of learnable object query vectors. The cross-attention process between the first image feature and the plurality of query vectors is as follows:

$$\hat{C} = C + \underset{N}{\mathrm{argmax}}\left(Q^{c} (K^{p})^{T}\right) \cdot V^{p}$$
wherein the superscripts c and p denote quantities derived from the query vectors and from the first image feature of the pixels, respectively: $Q^{c}$ may be the query matrix, $K^{p}$ may be the key matrix, and $V^{p}$ may be the value vectors corresponding to the first image feature. By computing the argmax(·) between $Q^{c}$ and $K^{p}$, pixels can be clustered onto $V^{p}$ according to maximum similarity, and the plurality of query vectors C are updated according to the clustering result; the subscript N indicates that the argmax operation is performed over the query dimension, thereby obtaining the plurality of attention features. Similar to the panoptic segmentation model kMaX-DeepLab, this application adopts an argmax over the cluster dimension in place of the softmax over the spatial dimension in the original cross-attention mechanism.
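A sketch of this cluster-style cross-attention update is given below. It follows the kMaX-DeepLab idea of replacing the spatial softmax with a hard argmax over the query (cluster) dimension; the projection layers and tensor shapes are illustrative assumptions rather than the exact design of this application.

```python
import torch
import torch.nn as nn

class ClusterCrossAttention(nn.Module):
    """Hedged sketch of a kMaX-style cross-attention update for the query vectors."""
    def __init__(self, dim):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)   # projects query vectors  -> Q^c
        self.k_proj = nn.Linear(dim, dim)   # projects pixel features -> K^p
        self.v_proj = nn.Linear(dim, dim)   # projects pixel features -> V^p

    def forward(self, queries, pixel_feats):
        # queries:     (N, dim)   learnable object query vectors C
        # pixel_feats: (HWD, dim) first image features P
        q = self.q_proj(queries)            # (N, dim)
        k = self.k_proj(pixel_feats)        # (HWD, dim)
        v = self.v_proj(pixel_feats)        # (HWD, dim)
        affinity = q @ k.t()                # (N, HWD) similarity between queries and pixels
        # Hard assignment: each pixel is attached to its most similar query (cluster)
        assign = torch.zeros_like(affinity)
        assign[affinity.argmax(dim=0), torch.arange(affinity.shape[1])] = 1.0
        # Update each query with the aggregated features of the pixels assigned to it
        return queries + assign @ v         # (N, dim) updated query vectors
```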
According to the cluster analysis view of Mask Transformers, semantic segmentation can be regarded as a two-stage cluster analysis process. First, all pixels can be assigned to different clusters, and the mask embedding vectors of the feature processing module (transformer) can be formulated as cluster centers. As shown in fig. 3, $P$ may be the first image feature, C may be the plurality of query vectors, and the product $R = C \times P^{T}$ may represent the query response, i.e., the attention features, which represent the similarity between each pixel and each cluster center. A mask prediction is then generated on the query response R using the activation function of the query multi-class classification problem, in order to encourage mutual exclusivity of the cluster assignment, where the plurality of attention features may be:
$$M = \mathrm{softmax}(R) = \mathrm{softmax}(C \times P^{T})$$
Notably, unlike the sigmoid (threshold function) activation used directly by a neural network, applying softmax activation over the query vectors can better guide the object queries (cluster centers) to attend to different regions of the image of interest, in order to achieve diversity of image segmentation in real-scene images.
The grouped pixels can then be classified under the guidance of cluster classification: the cluster centers C can be evaluated by a multi-layer perceptron (MLP) to predict the cluster categories $C^{K} \in \mathbb{R}^{N \times K}$ over K classes for the N clusters. The cluster assignment M (the grouped pixels) and their classifications $C^{K}$ are then aggregated for semantic segmentation, with the formula as follows:
$$Z = (C^{K})^{T} \times M$$
wherein $Z \in \mathbb{R}^{K \times HWD}$ represents the final logits; the classical segmentation loss on the final output Z and the query distribution loss on R may be combined in order to supervise the final segmentation.
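Putting the two stages together, a minimal sketch of how the final per-class logits could be produced from the query responses is shown below; the MLP head and tensor shapes are assumptions for illustration.

```python
import torch
import torch.nn as nn

def two_stage_segmentation(queries, pixel_feats, mlp_head, num_classes):
    """queries:     (N, dim) cluster centers C.
    pixel_feats: (HWD, dim) first image features P.
    mlp_head:    assumed nn.Module mapping (N, dim) -> (N, num_classes)."""
    # Stage 1 - cluster assignment: softmax over the query dimension
    responses = queries @ pixel_feats.t()          # R = C x P^T, shape (N, HWD)
    masks = torch.softmax(responses, dim=0)        # M = softmax(R) over the N queries
    # Stage 2 - cluster classification: classify each cluster center into K classes
    cluster_cls = mlp_head(queries)                # C^K, shape (N, num_classes)
    # Final logits: Z = (C^K)^T x M, shape (num_classes, HWD)
    logits = cluster_cls.t() @ masks
    return masks, logits
```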
In order to further segment unseen abnormal regions in an image, a procedure is needed for OOD localization when performing inference on a test image. Given a test image $X \in \mathbb{R}^{H \times W \times D}$, OOD localization evaluates the query vectors to find the maximum response, which represents the similarity between a pixel and its assigned cluster center; the model can then generate a pixel-level anomaly score map $A \in [0,1]^{H \times W \times D}$, wherein $A_i = 1$ and $A_i = 0$ mean that the i-th pixel in X belongs to the OOD class and the in-distribution class, respectively.
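The pixel-level anomaly score described here could be sketched as follows, using the complement of the maximum query response as the OOD indicator; normalizing into [0, 1] via a softmax over the queries is an illustrative assumption.

```python
import torch

def ood_anomaly_map(queries, pixel_feats):
    """Returns a per-pixel anomaly score in [0, 1]: higher means more likely OOD.
    queries: (N, dim) trained cluster centers; pixel_feats: (HWD, dim)."""
    responses = torch.softmax(queries @ pixel_feats.t(), dim=0)   # (N, HWD)
    # Maximum response = similarity between each pixel and its assigned cluster center
    max_response, _ = responses.max(dim=0)                        # (HWD,)
    # Anomaly score: the complement (negative indicator) of the maximum similarity
    return 1.0 - max_response
```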
FIG. 5 shows: (a) the neural network backbone for image segmentation, where the nnUNet training framework can be employed; (b) the decoder, which interactively updates the query vectors to fit the internal cluster centers; and (c) the two-stage cluster analysis, where the first stage is cluster assignment, in which pixels can be grouped according to the affinity between pixel features and cluster centers so as to construct the query distribution loss from the grouped pixel sets and the actual pixel sets, and the second stage is cluster classification, in which the grouped pixels are classified to generate the segmentation output, i.e., the segmentation loss is constructed from the segmentation output and the ground-truth segmentation; the overall segmentation is thus supervised by the classical segmentation loss and the query distribution loss.
The implementation background of the scheme is as follows:
current image segmentation in this field involves extremely complex long-tailed objects, where the tail conditions correspond to relatively rare types and are of clinical significance. Segmentation algorithms should prove their effectiveness on tail conditions in order to avoid clinically dangerous failures on these OOD cases. In this application, the concept of object queries in Mask Transformers may be employed to treat semantic segmentation as cluster assignment, with the queries fitting the feature-level cluster centers of pixels during training. Thus, when performing inference on images of real-world scenes, the maximum query response can be checked to locate OOD regions based on the similarity between pixels and cluster centers; this framework is referred to as MaxQuery. Furthermore, the foreground of a real-world image, whether an OOD object or an in-distribution object, is part of an organ; the difference between them is smaller than the difference between the foreground and the background, which may mislead the object queries to pay excessive attention to the background. Therefore, a query-distribution (QD) loss is proposed in the present application to force a clear boundary between the segmentation targets and other regions at the query level, improving both pixel segmentation and OOD localization.
Image segmentation is a fundamental task in image analysis. With recent advances in computer vision and deep learning, automated image segmentation has achieved good performance in a variety of applications, but most image segmentation approaches are based on supervised learning and rely heavily on collecting and annotating training data.
However, actual images are long-tail distributed, and the tail conditions are outliers with data insufficient for training a reliable model; moreover, such a trained model may trigger failures or error risks in real clinical deployment, for example, in the analysis of an organ image, a missed fine part may prevent the organ image from being accurately semantically segmented. Thus, the segmentation model should improve its ability to demonstrate and detect OOD conditions. While current research has made valuable attempts at OOD localization, beyond the normal or simulated OOD conditions used for model verification, clinical scenes in real scenarios are more complex; it is difficult to establish direct relationships between image pixels and high-level semantics for image segmentation in real scenes, and it is even more challenging to use such relationships to distinguish outliers.
In this application, following Mask Transformers, segmentation can be split into a two-stage process of per-pixel cluster assignment and cluster classification, and a well-defined set of pixel clusters can greatly help to identify OOD conditions in an image. The application therefore proposes MaxQuery, an image semantic segmentation framework that pushes Mask Transformers to locate OOD targets. The framework employs learnable object queries to iteratively adapt to the cluster centers, and since the similarity between OOD pixels and the in-distribution (inlier) cluster centers is smaller than the similarity between inlier pixels and their cluster centers, MaxQuery uses the negative of this similarity as an indicator to monitor OOD regions.
The contributions of this application are mainly as follows:
the maximum similarity of the query vectors can be used as a main index of OOD positioning in the method.
The query distribution loss presented in this application, which focuses the queries on important foreground regions, may prove effective against near-OOD problems.
In the application, two image data sets are constructed for realizing semantic segmentation or monitoring of OOD in a real scene.
The framework presented in this application is much better than previous OOD localization methods and improves the performance of pixel segmentation.
The related work achieved by the scheme of the application is as follows:
semantic segmentation can be used to detect regions of interest in images, so it is important to develop a reliable segmentation method. Vision Transformers (ViTs) integrate transformer blocks into the backbone of the network architecture and perform well on multiple types of semantic segmentation tasks; the problems addressed by this application mainly concern exploring the localization and detection of real-scene OOD in image segmentation, for which the performance provided by prior solutions is limited. Therefore, this application studies a novel architecture combining Transformers and nnUNet, so as to improve the segmentation performance of image processing, and it can be widely used.
Mask Transformers: unlike network backbones that use transformers directly for image segmentation, Mask Transformers mainly use independent transformer blocks to augment a convolutional-network-based backbone. MaX-DeepLab (panoptic segmentation) interprets the object queries of DETR (detection transformer) as memory-encoded queries for end-to-end panoptic segmentation, and MaskFormer intuitively adapts this design to semantic segmentation of images through a unified convolutional neural network and transformer design, which requires that the network have local sensitivity to the image texture for organ segmentation and a global understanding for identifying the morphology of the organ.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required in the present application.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), comprising several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method described in the embodiments of the present application.
Example 2
According to an embodiment of the present application, there is also provided an image processing method, it being noted that the steps shown in the flowchart of the drawings may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from that herein.
Fig. 6 is a flowchart of another image processing method according to embodiment 2 of the present application. As shown in fig. 6, the method may include the steps of:
in step S602, in response to an input instruction acting on the operation interface, an image to be processed is displayed on the operation interface.
Wherein the image to be processed comprises an image of a region of at least one organ of the biological object;
the operation interface may be a display interface capable of being operated by a user, and the input instruction may be an instruction for confirming an image to be processed.
In step S604, in response to the image processing instruction acting on the operation interface, the target recognition result of the image to be processed is displayed on the operation interface.
The target recognition result is obtained by recognizing the image to be processed based on the first image feature and a plurality of attention features of the part image, wherein the attention features are obtained by carrying out cross attention processing on the first image feature and a plurality of query vectors, different query vectors are used for representing feature types of different pixel points in the part image, and the first image feature is obtained by carrying out feature extraction on the image to be processed.
The image processing instruction can process the image to be processed according to the image processing instruction generated by performing related operation when the image is required to be processed, and a target recognition result of the image to be processed is obtained.
By the steps, responding to an input instruction acted on an operation interface, displaying an image to be processed on the operation interface, wherein the image to be processed comprises part images of at least one organ of a biological object; and responding to an image processing instruction acting on the operation interface, and displaying a target recognition result of the image to be processed on the operation interface, wherein the target recognition result is used for representing the probability that pixel points in the image to be processed meet a preset condition, the target recognition result is obtained by recognizing the image to be processed based on a first image feature and a plurality of attention features of the part image, the plurality of attention features are obtained by carrying out cross attention processing on the first image feature and a plurality of query vectors, different query vectors are used for representing feature types of different pixel points in the part image, and the first image feature is obtained by carrying out feature extraction on the image to be processed, so that the processing accuracy of the image to be processed is improved. It is easy to note that the image to be processed can be identified based on the first image feature and the plurality of attention features, the object exceeding the distribution in the image to be processed can be positioned, and the abnormal condition in the image to be processed can be accurately determined, so that the processing accuracy of the image to be processed is improved, and the technical problem of lower accuracy of image processing in the related art is solved.
It should be noted that, the preferred embodiments in the foregoing examples of the present application are the same as the embodiments provided in example 1, the application scenario and the implementation process, but are not limited to the embodiments provided in example 1.
Example 3
There is also provided, in accordance with an embodiment of the present application, an image processing method applicable to virtual reality scenes such as virtual reality VR devices, augmented reality AR devices, etc., it being noted that the steps illustrated in the flowcharts of the drawings may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order different from that herein.
Fig. 7 is a flowchart of an image processing method according to embodiment 3 of the present application. As shown in fig. 7, the method may include the steps of:
step S702, displaying the image to be processed on a presentation screen of the virtual reality VR device or the augmented reality AR device.
Wherein the image to be processed comprises an image of a region of at least one organ of the biological object.
Step S704, extracting features of the image to be processed to obtain a first image feature of the part image.
Step S706, performing cross attention processing on the first image feature and the plurality of query vectors to obtain a plurality of attention features.
The different query vectors are used for representing feature categories of different pixel points in the part image.
Step S708, based on the first image feature and the plurality of attention features, identifies the image to be processed, and obtains a target identification result of the image to be processed.
The target recognition result is used for representing the probability that the pixel points in the image to be processed meet the preset condition.
Step S710, driving the VR device or the AR device to render the target recognition result.
Through the steps, the image to be processed is displayed on a display picture of the virtual reality VR device or the augmented reality AR device, wherein the image to be processed contains part images of at least one organ of the biological object; extracting features of the image to be processed to obtain first image features of the part image; performing cross attention processing on the first image feature and a plurality of query vectors to obtain a plurality of attention features, wherein different query vectors are used for representing feature categories of different pixel points in the part image; identifying the image to be processed based on the first image feature and the plurality of attention features to obtain a target identification result of the image to be processed, wherein the target identification result is used for representing the probability that pixel points in the image to be processed meet preset conditions; and driving the VR equipment or the AR equipment to render the target recognition result, so that the processing accuracy of the image to be processed is improved. It is easy to note that the image to be processed can be identified based on the first image feature and the plurality of attention features, the object exceeding the distribution in the image to be processed can be positioned, and the abnormal condition in the image to be processed can be accurately determined, so that the processing accuracy of the image to be processed is improved, and the technical problem of lower accuracy of image processing in the related art is solved.
Alternatively, in the present embodiment, the above-described image processing method may be applied to a hardware environment constituted by a server and a virtual reality device. The image to be processed is shown on a presentation screen of the virtual reality VR device or the augmented reality AR device, and the server may be a server corresponding to a media file operator, where the network includes but is not limited to: a wide area network, a metropolitan area network, or a local area network; and the virtual reality device is not limited to: virtual reality helmets, virtual reality glasses, virtual reality all-in-one machines, and the like.
Optionally, the virtual reality device comprises: memory, processor, and transmission means. The memory is used to store an application program that can be used to perform: displaying an image to be processed on a presentation screen of a Virtual Reality (VR) device or an Augmented Reality (AR) device, wherein the image to be processed contains part images of at least one organ of a biological object; extracting features of the image to be processed to obtain first image features of the part image; cross attention processing is carried out on the first image feature and a plurality of query vectors to obtain a plurality of attention features, wherein different query vectors are used for representing feature categories of different pixel points in the part image; identifying an image to be processed based on the first image feature and the plurality of attention features to obtain a target identification result of the image to be processed, wherein the target identification result is used for representing the probability that pixel points in the image to be processed meet preset conditions; and driving the VR device or the AR device to render the target recognition result.
Alternatively, the processor of this embodiment may call the application program stored in the memory through the transmission device to perform the above steps. The transmission device can receive the media file sent by the server through the network and can also be used for data transmission between the processor and the memory.
Optionally, the virtual reality device is provided with a head-mounted display (HMD) with eye tracking. The screen in the HMD is used for displaying the presented video picture, the eye tracking module in the HMD is used for acquiring the real-time motion track of the user's eyes, the tracking system is used for tracking the position information and motion information of the user in the real three-dimensional space, and the calculation processing unit is used for acquiring the real-time position and motion information of the user from the tracking system and calculating the three-dimensional coordinates of the user's head in the virtual three-dimensional space, the user's visual field orientation in the virtual three-dimensional space, and the like.
In this embodiment of the present application, the virtual reality device may be connected to a terminal, and the terminal and the server are connected through a network; the virtual reality device is not limited to the forms described above, the terminal is not limited to a PC, a mobile phone, a tablet PC, etc., the server may be a server corresponding to a media file operator, and the network includes but is not limited to: a wide area network, a metropolitan area network, or a local area network.
It should be noted that, the preferred embodiments in the foregoing examples of the present application are the same as the embodiments provided in example 1, the application scenario and the implementation process, but are not limited to the embodiments provided in example 1.
Example 4
According to an embodiment of the present application, there is also provided an image processing method, it being noted that the steps shown in the flowchart of the drawings may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from that herein.
Fig. 8 is a flowchart of an image processing method according to embodiment 4 of the present application. As shown in fig. 8, the method may include the steps of:
step S802, a to-be-processed image is acquired by calling a first interface.
The first interface comprises a first parameter, wherein the parameter value of the first parameter is an image to be processed, and the image to be processed comprises part images of at least one organ of the biological object.
The first interface may be an interface where the client is connected to the server, and the client may upload the image to be processed to the server through the first interface.
Step S804, extracting features of the image to be processed to obtain a first image feature of the part image.
In step S806, cross attention processing is performed on the first image feature and the plurality of query vectors, so as to obtain a plurality of attention features.
The different query vectors are used for representing feature categories of different pixel points in the part image.
Step S808, based on the first image feature and the plurality of attention features, identifying the image to be processed, and obtaining a target identification result of the image to be processed.
The target recognition result is used for representing the probability that the pixel points in the image to be processed meet the preset condition.
Step S810, outputting a target identification result by calling the second interface.
The second interface comprises a second parameter, and the parameter value of the second parameter is a target identification result.
The second interface may be an interface where the client is connected to the server, and the server may return the target identification result to the client through the second interface.
Through the steps, an image to be processed is obtained by calling a first interface, wherein the first interface comprises a first parameter, the parameter value of the first parameter is the image to be processed, and the image to be processed comprises part images of at least one organ of a biological object; extracting features of the image to be processed to obtain first image features of the part image; performing cross attention processing on the first image feature and a plurality of query vectors to obtain a plurality of attention features, wherein different query vectors are used for representing feature categories of different pixel points in the part image; identifying the image to be processed based on the first image feature and the plurality of attention features to obtain a target identification result of the image to be processed, wherein the target identification result is used for representing the probability that pixel points in the image to be processed meet preset conditions; and outputting the target recognition result by calling a second interface, wherein the second interface comprises a second parameter, and the parameter value of the second parameter is the target recognition result, so that the processing accuracy of the image to be processed is improved. It is easy to note that the image to be processed can be identified based on the first image feature and the plurality of attention features, the object exceeding the distribution in the image to be processed can be positioned, and the abnormal condition in the image to be processed can be accurately determined, so that the processing accuracy of the image to be processed is improved, and the technical problem of lower accuracy of image processing in the related art is solved.
It should be noted that, the preferred embodiments in the foregoing examples of the present application are the same as the embodiments provided in example 1, the application scenario and the implementation process, but are not limited to the embodiments provided in example 1.
Example 5
There is further provided, according to an embodiment of the present application, an image processing apparatus for implementing the above image processing method, and fig. 9 is a schematic diagram of an image processing apparatus according to embodiment 5 of the present application, as shown in fig. 9, and the apparatus 900 includes: an acquisition module 902, an extraction module 904, a processing module 906, and an identification module 908.
The acquisition module is used for acquiring an image to be processed, wherein the image to be processed comprises part images of at least one organ of a biological object; the extraction module is used for extracting features of the image to be processed to obtain first image features of the part image; the processing module is used for carrying out cross attention processing on the first image feature and a plurality of query vectors to obtain a plurality of attention features, wherein different query vectors are used for representing feature categories of different pixel points in the part image; the recognition module is used for recognizing the image to be processed based on the first image feature and the plurality of attention features to obtain a target recognition result of the image to be processed, wherein the target recognition result is used for representing the probability that the pixel points in the image to be processed meet the preset condition.
Here, it should be noted that the above-mentioned acquisition module, extraction module, processing module and identification module correspond to step S302 to step S308 in embodiment 1, and the four modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to those disclosed in embodiment 1 above. It should be noted that the above modules may be run as part of the apparatus in the AR/VR device provided in embodiment 1.
In this embodiment of the application, the identification module includes: the device comprises a first determining unit, a first acquiring unit and a second acquiring unit.
The first determining unit is used for determining similarity between the first image feature and the plurality of attention features to obtain a plurality of similarities; the first acquisition unit is used for acquiring the maximum similarity in the multiple similarities; the second obtaining unit is used for obtaining the opposite number of the maximum similarity to obtain the target recognition result.
In this embodiment of the present application, the apparatus further includes: the device comprises a first determining module, a second determining module and a semantic segmentation module.
The first determining module is used for determining target types corresponding to the plurality of attention features; the second determining module is used for determining the similarity between the first image feature and the plurality of attention features to obtain a plurality of similarities; the semantic segmentation module is used for carrying out semantic segmentation on pixels in the image to be processed based on a plurality of similarities and target types to obtain a target semantic segmentation result, wherein the target semantic segmentation result is used for representing the category of the pixel points belonging to the part image in the image to be processed.
In this embodiment of the present application, the semantic segmentation module includes: a grouping unit and a classifying unit.
The grouping unit is used for grouping pixels in the image to be processed based on the plurality of similarities to obtain a first pixel set; the classification unit is used for classifying the first pixel set based on the target type to obtain a semantic segmentation result.
In this embodiment of the present application, the processing module includes: a first processing unit.
The first processing unit is used for performing cross attention processing on the first image feature and the plurality of query vectors by using the decoder model to obtain a plurality of attention features.
In this embodiment of the present application, the first processing unit includes: clustering subunits and updating subunits.
The clustering subunit is used for clustering the first image features based on a plurality of query vectors to obtain a clustering result; the updating subunit is used for updating the plurality of query vectors based on the clustering result to obtain a plurality of attention features.
In this embodiment of the present application, the processing module further includes: the device comprises a third acquisition unit, an extraction unit, a second processing unit, an identification unit, a second determination unit and an adjustment unit.
The third obtaining unit is configured to obtain a training sample, where the training sample includes: a sample image and a preset semantic segmentation result, wherein the sample image comprises a preset image of a preset organ of a preset biological object; the extraction unit is used for extracting the features of the sample image to obtain second image features of the preset image; the second processing unit is used for performing cross attention processing on the second image feature and the plurality of query vectors by using the decoder model to obtain a plurality of sample attention features; the identifying unit is used for identifying the sample image based on the second image feature and the plurality of sample attention features to obtain a sample processing result of the sample image, wherein the sample processing result comprises: a sample semantic segmentation result and a second pixel set, the sample semantic segmentation result being used for representing pixel points belonging to the preset image in the sample image, and the pixel sets in the second pixel set respectively comprising pixel points of different preset types in the sample image; the second determining unit is used for determining a total loss value of the decoder model based on the sample semantic segmentation result, the preset semantic segmentation result and the second pixel set; the adjustment unit is used for adjusting model parameters of the decoder model based on the total loss value.
In this embodiment of the present application, the identification unit includes: a first determination subunit, a grouping subunit, and a first merging subunit.
The first determining subunit is configured to determine similarities between the second image feature and a plurality of sample attention features, so as to obtain a plurality of sample similarities; the grouping subunit is used for grouping pixels in the sample image based on the multiple sample similarities to obtain an initial pixel set; the first merging subunit is configured to merge the initial pixel set based on a preset category to obtain a second pixel set.
In this embodiment of the present application, the second determining unit includes: the system comprises a second determining subunit, a second merging subunit, a third determining subunit and an acquiring subunit.
The second determining subunit is used for determining a first loss value of the decoder model based on the sample semantic segmentation result and a preset semantic segmentation result; the second merging subunit merges the preset semantic segmentation results based on preset categories corresponding to the preset semantic segmentation results to obtain a preset pixel set; the third determining subunit is configured to determine a second loss value of the decoder model based on the second pixel set and the preset pixel set; the obtaining subunit is configured to obtain a weighted sum of the first loss value and the second loss value, so as to obtain a total loss value.
It should be noted that, the preferred embodiments in the foregoing examples of the present application are the same as the embodiments provided in example 1, the application scenario and the implementation process, but are not limited to the embodiments provided in example 1.
Example 6
There is also provided, according to an embodiment of the present application, an image processing apparatus for implementing the above image processing method, and fig. 10 is a schematic diagram of an image processing apparatus according to embodiment 6 of the present application, as shown in fig. 10, including: a first display module 1002 and a second display module 1004.
The first display module is used for responding to an input instruction acted on the operation interface and displaying an image to be processed on the operation interface, wherein the image to be processed comprises part images of at least one organ of a biological object; the second display module is used for responding to an image processing instruction acting on the operation interface, displaying a target recognition result of the image to be processed on the operation interface, wherein the target recognition result is used for representing the probability that pixel points in the image to be processed meet preset conditions, the target recognition result is obtained by recognizing the image to be processed based on a first image feature and a plurality of attention features of the part image, the attention features are obtained by carrying out cross attention processing on the first image feature and a plurality of query vectors, different query vectors are used for representing feature types of different pixel points in the part image, and the first image feature is obtained by carrying out feature extraction on the image to be processed.
Here, it should be noted that the first display module 1002 and the second display module 1004 correspond to steps S602 to S604 in embodiment 2, and the two modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to those disclosed in embodiment 1. It should be noted that the above modules may be run as part of the apparatus in the AR/VR device provided in embodiment 1.
It should be noted that, the preferred embodiments in the foregoing examples of the present application are the same as the embodiments provided in example 1, the application scenario and the implementation process, but are not limited to the embodiments provided in example 1.
Example 7
There is also provided, according to an embodiment of the present application, an image processing apparatus for implementing the above image processing method, and fig. 11 is a schematic diagram of an image processing apparatus according to embodiment 7 of the present application, as shown in fig. 11, including: a presentation module 1102, an extraction module 1104, a processing module 1106, an identification module 1108, and a drive module 1110.
The display module is used for displaying an image to be processed on a display picture of the Virtual Reality (VR) device or the Augmented Reality (AR) device, wherein the image to be processed contains part images of at least one organ of a biological object; the extraction module is used for extracting features of the image to be processed to obtain first image features of the part image; the processing module is used for carrying out cross attention processing on the first image feature and a plurality of query vectors to obtain a plurality of attention features, wherein different query vectors are used for representing feature categories of different pixel points in the part image; the recognition module is used for recognizing the image to be processed based on the first image feature and the plurality of attention features to obtain a target recognition result of the image to be processed, wherein the target recognition result is used for representing the probability that the pixel points in the image to be processed meet the preset condition; the driving module is used for driving the VR equipment or the AR equipment to render the target recognition result.
It should be noted that the above-mentioned display module, extraction module, processing module, identification module and driving module correspond to steps S702 to S710 in embodiment 3, and the five modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to those disclosed in embodiment 1. It should be noted that the above modules may be run as part of the apparatus in the AR/VR device provided in embodiment 1.
It should be noted that, the preferred embodiments in the foregoing examples of the present application are the same as the embodiments provided in example 1, the application scenario and the implementation process, but are not limited to the embodiments provided in example 1.
Example 8
There is further provided, according to an embodiment of the present application, an image processing apparatus for implementing the above image processing method, and fig. 12 is a schematic diagram of an image processing apparatus according to embodiment 8 of the present application. As shown in fig. 12, the apparatus 1200 includes: an acquisition module 1202, an extraction module 1204, a processing module 1206, an identification module 1208, and an output module 1210.
The acquisition module is used for acquiring an image to be processed by calling the first interface, wherein the first interface comprises a first parameter, the parameter value of the first parameter is the image to be processed, and the image to be processed comprises part images of at least one organ of a biological object; the extraction module is used for extracting features of the image to be processed to obtain first image features of the part image; the processing module is used for carrying out cross attention processing on the first image feature and a plurality of query vectors to obtain a plurality of attention features, wherein different query vectors are used for representing feature categories of different pixel points in the part image; the recognition module is used for recognizing the image to be processed based on the first image feature and the plurality of attention features to obtain a target recognition result of the image to be processed, wherein the target recognition result is used for representing the probability that the pixel points in the image to be processed meet the preset condition; the output module is used for outputting a target recognition result by calling a second interface, wherein the second interface comprises a second parameter, and the parameter value of the second parameter is the target recognition result.
Here, it should be noted that the above-mentioned obtaining module, extracting module, processing module, identifying module and outputting module correspond to steps S802 to S810 in embodiment 4, and the five modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to those disclosed in embodiment 1 above. It should be noted that the above modules may be run as part of the apparatus in the AR/VR device provided in embodiment 1.
It should be noted that, the preferred embodiments in the foregoing examples of the present application are the same as the embodiments provided in example 1, the application scenario and the implementation process, but are not limited to the embodiments provided in example 1.
Example 9
Embodiments of the present application may provide a computer terminal, which may be any one of a group of computer terminals. Alternatively, in the present embodiment, the above-described computer terminal may be replaced with a terminal device such as a mobile terminal.
Alternatively, in this embodiment, the above-mentioned computer terminal may be located in at least one network device among a plurality of network devices of the computer network.
In this embodiment, the above-described computer terminal may execute the program code of the following steps in the image processing method: acquiring an image to be processed, wherein the image to be processed comprises part images of at least one organ of a biological object; extracting features of the image to be processed to obtain first image features of the part image; cross attention processing is carried out on the first image feature and a plurality of query vectors to obtain a plurality of attention features, wherein different query vectors are used for representing feature categories of different pixel points in the part image; based on the first image feature and the plurality of attention features, identifying the image to be processed to obtain a target identification result of the image to be processed, wherein the target identification result is used for representing the probability that the pixel points in the image to be processed meet the preset condition.
Alternatively, fig. 13 is a block diagram of a computer terminal according to an embodiment of the present application. As shown in fig. 13, the computer terminal A may include: one or more processors 102 (only one is shown), a memory 104, a memory controller, and peripheral interfaces, where the peripheral interfaces are connected to the radio frequency module, the audio module, and the display.
The memory may be used to store software programs and modules, such as program instructions/modules corresponding to the image recognition method and apparatus in the embodiments of the present application, and the processor executes the software programs and modules stored in the memory, thereby executing various functional applications and data processing, that is, implementing the image recognition method described above. The memory may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory remotely located with respect to the processor, which may be connected to terminal a through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor may call the information and the application program stored in the memory through the transmission device to perform the following steps: acquiring an image to be processed, wherein the image to be processed comprises part images of at least one organ of a biological object; extracting features of the image to be processed to obtain first image features of the part image; cross attention processing is carried out on the first image feature and a plurality of query vectors to obtain a plurality of attention features, wherein different query vectors are used for representing feature categories of different pixel points in the part image; based on the first image feature and the plurality of attention features, identifying the image to be processed to obtain a target identification result of the image to be processed, wherein the target identification result is used for representing the probability that the pixel points in the image to be processed meet the preset condition.
Optionally, the above processor may further execute instructions for: determining the similarity between the first image feature and the plurality of attention features to obtain a plurality of similarities; obtaining the maximum similarity in the multiple similarities; and obtaining the opposite number of the maximum similarity to obtain the target recognition result.
Optionally, the above processor may further execute instructions for: determining target types corresponding to the plurality of attention features; determining the similarity between the first image feature and the plurality of attention features to obtain a plurality of similarities; based on a plurality of similarities and target types, performing semantic segmentation on pixels in the image to be processed to obtain a target semantic segmentation result, wherein the target semantic segmentation result is used for representing the category of the pixel points belonging to the part image in the image to be processed.
Optionally, the above processor may further execute instructions for: grouping pixels in an image to be processed based on a plurality of similarities to obtain a first pixel set; and classifying the first pixel set based on the target type to obtain a semantic segmentation result.
Optionally, the above processor may further execute instructions for: and performing cross attention processing on the first image feature and the plurality of query vectors by using the decoder model to obtain a plurality of attention features.
Optionally, the above processor may further execute instructions for: clustering the first image features based on a plurality of query vectors to obtain a clustering result; and updating the plurality of query vectors based on the clustering result to obtain a plurality of attention features.
Optionally, the above processor may further execute instructions for: obtaining a training sample, wherein the training sample comprises: a sample image and a preset semantic segmentation result, wherein the sample image comprises a preset image of a preset organ of a preset biological object; extracting features of the sample image to obtain second image features of the preset image; performing cross attention processing on the second image feature and the plurality of query vectors by using the decoder model to obtain a plurality of sample attention features; based on the second image feature and the plurality of sample attention features, identifying the sample image to obtain a sample processing result of the sample image, wherein the sample processing result comprises: a sample semantic segmentation result and a second pixel set, the sample semantic segmentation result being used for representing pixel points belonging to the preset image in the sample image, and the pixel sets in the second pixel set respectively comprising pixel points of different preset types in the sample image; determining a total loss value of the decoder model based on the sample semantic segmentation result, the preset semantic segmentation result and the second pixel set; and adjusting model parameters of the decoder model based on the total loss value.
Optionally, the above processor may further execute instructions for: determining the similarity between the second image feature and the plurality of sample attention features to obtain a plurality of sample similarities; grouping pixels in the sample image based on the plurality of sample similarities to obtain an initial pixel set; and merging the initial pixel sets based on the preset category to obtain a second pixel set.
Optionally, the above processor may further execute instructions for: determining a first loss value of the decoder model based on the sample semantic segmentation result and a preset semantic segmentation result; combining the preset semantic segmentation results based on preset categories corresponding to the preset semantic segmentation results to obtain a preset pixel set; determining a second loss value of the decoder model based on the second set of pixels and the preset set of pixels; and obtaining a weighted sum of the first loss value and the second loss value to obtain a total loss value.
The processor may call the information and the application program stored in the memory through the transmission device to perform the following steps: displaying an image to be processed on the operation interface in response to an input instruction acting on the operation interface, wherein the image to be processed contains an image of a region of at least one organ of the biological object; and responding to an image processing instruction acting on an operation interface, and displaying a target recognition result of the image to be processed on the operation interface, wherein the target recognition result is used for representing the probability that pixel points in the image to be processed meet a preset condition, the target recognition result is obtained by recognizing the image to be processed based on a first image feature and a plurality of attention features of the part image, the plurality of attention features are obtained by carrying out cross attention processing on the first image feature and a plurality of query vectors, different query vectors are used for representing feature types of different pixel points in the part image, and the first image feature is obtained by carrying out feature extraction on the image to be processed.
The processor may call the information and the application program stored in the memory through the transmission device to perform the following steps: displaying an image to be processed on a presentation screen of a Virtual Reality (VR) device or an Augmented Reality (AR) device, wherein the image to be processed contains part images of at least one organ of a biological object; extracting features of the image to be processed to obtain first image features of the part image; cross attention processing is carried out on the first image feature and a plurality of query vectors to obtain a plurality of attention features, wherein different query vectors are used for representing feature categories of different pixel points in the part image; identifying an image to be processed based on the first image feature and the plurality of attention features to obtain a target identification result of the image to be processed, wherein the target identification result is used for representing the probability that pixel points in the image to be processed meet preset conditions; and driving the VR device or the AR device to render the target recognition result.
The processor may call the information and the application program stored in the memory through the transmission device to perform the following steps: acquiring an image to be processed by calling a first interface, wherein the first interface comprises a first parameter, the parameter value of the first parameter is the image to be processed, and the image to be processed comprises part images of at least one organ of a biological object; extracting features of the image to be processed to obtain first image features of the part image; cross attention processing is carried out on the first image feature and a plurality of query vectors to obtain a plurality of attention features, wherein different query vectors are used for representing feature categories of different pixel points in the part image; identifying an image to be processed based on the first image feature and the plurality of attention features to obtain a target identification result of the image to be processed, wherein the target identification result is used for representing the probability that pixel points in the image to be processed meet preset conditions; and outputting a target identification result by calling a second interface, wherein the second interface comprises a second parameter, and the parameter value of the second parameter is the target identification result.
By adopting the embodiments of the present application, an image to be processed is obtained, wherein the image to be processed comprises part images of at least one organ of a biological object; features of the image to be processed are extracted to obtain a first image feature of the part image; cross attention processing is performed on the first image feature and a plurality of query vectors to obtain a plurality of attention features, wherein different query vectors are used for representing feature categories of different pixel points in the part image; and the image to be processed is identified based on the first image feature and the plurality of attention features to obtain a target identification result of the image to be processed, wherein the target identification result is used for representing the probability that pixel points in the image to be processed meet a preset condition, thereby improving the processing accuracy of the image to be processed. It is easy to note that, because the image to be processed is identified based on the first image feature and the plurality of attention features, objects that fall outside the learned distribution (out-of-distribution objects) in the image to be processed can be localized, and abnormal conditions in the image to be processed can be accurately determined, so that the processing accuracy of the image to be processed is improved and the technical problem of low image processing accuracy in the related art is solved.
It will be appreciated by those skilled in the art that the configuration shown in Fig. 13 is only illustrative, and the computer terminal may also be a smart phone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device. Fig. 13 does not limit the structure of the above electronic device. For example, the computer terminal 10 may also include more or fewer components (such as a network interface or a display device) than shown in Fig. 13, or may have a configuration different from that shown in Fig. 13.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program instructing hardware associated with a terminal device. The program may be stored in a computer-readable storage medium, and the storage medium may include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
Example 14
Embodiments of the present application also provide a computer-readable storage medium. Optionally, in this embodiment, the above computer-readable storage medium may be used to store the program code for executing the image processing method provided in embodiment 1 above.
Optionally, in this embodiment, the above computer-readable storage medium may be located in any AR/VR device terminal in an AR/VR device network or in any mobile terminal in a mobile terminal group.
Optionally, in the present embodiment, the computer readable storage medium is configured to store program code for performing the steps of: acquiring an image to be processed, wherein the image to be processed comprises part images of at least one organ of a biological object; extracting features of the image to be processed to obtain first image features of the part image; cross attention processing is carried out on the first image feature and a plurality of query vectors to obtain a plurality of attention features, wherein different query vectors are used for representing feature categories of different pixel points in the part image; based on the first image feature and the plurality of attention features, identifying the image to be processed to obtain a target identification result of the image to be processed, wherein the target identification result is used for representing the probability that the pixel points in the image to be processed meet the preset condition.
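A minimal end-to-end sketch of the stored program code above, assuming a fixed random projection in place of the real encoder, K query vectors, a single cross-attention step, and cosine similarity for the identification step (all of these choices are illustrative rather than prescribed by the embodiment):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Assumed sizes: a single-channel image, C-dimensional pixel features,
# and K query vectors (one per feature category). All are illustrative.
H, W, C, K = 32, 32, 16, 4

def extract_features(image: torch.Tensor) -> torch.Tensor:
    """Feature extraction: a fixed random projection stands in for the real
    encoder that yields the first image feature (C x H x W)."""
    proj = torch.randn(C, 1, 3, 3)
    return F.conv2d(image.unsqueeze(0), proj, padding=1)[0]        # (C, H, W)

def cross_attention(features: torch.Tensor, queries: torch.Tensor) -> torch.Tensor:
    """Cross attention between pixel features and query vectors: every query
    attends over all pixels and aggregates them into one attention feature."""
    pixels = features.flatten(1).t()                               # (H*W, C)
    attn = torch.softmax(queries @ pixels.t() / C ** 0.5, dim=-1)  # (K, H*W)
    return attn @ pixels                                           # (K, C) attention features

def recognize(features: torch.Tensor, attn_feats: torch.Tensor) -> torch.Tensor:
    """Identification: per-pixel score that is high when the pixel resembles
    none of the learned feature categories."""
    pixels = F.normalize(features.flatten(1).t(), dim=-1)          # (H*W, C)
    protos = F.normalize(attn_feats, dim=-1)                       # (K, C)
    sim = pixels @ protos.t()                                      # (H*W, K) similarities
    return (-sim.max(dim=-1).values).reshape(H, W)                 # target recognition result

image = torch.rand(1, H, W)                  # image to be processed (part image of an organ)
feats = extract_features(image)              # first image feature
queries = torch.randn(K, C)                  # plurality of query vectors
attn_feats = cross_attention(feats, queries) # plurality of attention features
result = recognize(feats, attn_feats)        # per-pixel target recognition result
```

In this reading, a high score for a pixel means it matches none of the learned feature categories, which corresponds to the pixel being likely to meet the preset condition (being out of distribution).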
Optionally, the above computer-readable storage medium is further configured to store program code for performing the steps of: determining the similarity between the first image feature and the plurality of attention features to obtain a plurality of similarities; obtaining the maximum similarity among the plurality of similarities; and taking the negative of the maximum similarity to obtain the target recognition result.
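In code, this step amounts to one matrix multiplication and a per-pixel maximum; the sketch below assumes cosine similarity, which is one possible choice of similarity measure:

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes: N pixel feature vectors and K attention features.
pixel_feats = F.normalize(torch.randn(1024, 16), dim=-1)    # first image feature, per pixel
attn_feats = F.normalize(torch.randn(4, 16), dim=-1)        # plurality of attention features

similarities = pixel_feats @ attn_feats.t()                  # plurality of similarities (N, K)
max_sim = similarities.max(dim=-1).values                    # maximum similarity per pixel
target_recognition_result = -max_sim                         # negative of the maximum similarity
# A large value means the pixel matches none of the learned feature categories,
# i.e. it is likely to be out of distribution (abnormal).
```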
Optionally, the above computer-readable storage medium is further configured to store program code for performing the steps of: determining target types corresponding to the plurality of attention features; determining the similarity between the first image feature and the plurality of attention features to obtain a plurality of similarities; and performing semantic segmentation on pixels in the image to be processed based on the plurality of similarities and the target types to obtain a target semantic segmentation result, wherein the target semantic segmentation result is used for representing the categories of the pixel points belonging to the part image in the image to be processed.
Optionally, the above computer-readable storage medium is further configured to store program code for performing the steps of: grouping pixels in the image to be processed based on the plurality of similarities to obtain a first pixel set; and classifying the first pixel set based on the target types to obtain the semantic segmentation result.
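A compact sketch of this grouping-then-classifying reading of semantic segmentation; the mapping from query index to target type is hypothetical and would in practice be produced by the decoder model:

```python
import torch

torch.manual_seed(0)
N, K = 1024, 4
similarities = torch.randn(N, K)     # similarities between pixels and the K attention features

# Step 1: group pixels. Each pixel joins the group of its most similar attention
# feature, which yields the first pixel set (one set of pixel indices per query).
group_id = similarities.argmax(dim=-1)                                        # (N,)
first_pixel_set = [torch.nonzero(group_id == k).flatten() for k in range(K)]

# Step 2: classify each group by the target type of its query vector.
# This mapping is hypothetical; in practice it is produced by the decoder model.
target_type = {0: "background", 1: "organ", 2: "organ", 3: "abnormal"}
semantic_segmentation = [target_type[k] for k in group_id.tolist()]           # category per pixel
```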
Optionally, the above computer-readable storage medium is further configured to store program code for performing the steps of: performing cross attention processing on the first image feature and the plurality of query vectors by using a decoder model to obtain the plurality of attention features.
Optionally, the above computer-readable storage medium is further configured to store program code for performing the steps of: clustering the first image features based on the plurality of query vectors to obtain a clustering result; and updating the plurality of query vectors based on the clustering result to obtain the plurality of attention features.
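Under this clustering view, cross attention can be sketched as a k-means-like update in which the query vectors act as cluster centres and the updated centres become the attention features; the use of soft assignments and three iterations below is an assumption, not something fixed by the embodiment:

```python
import torch

torch.manual_seed(0)
N, C, K = 1024, 16, 4
pixel_feats = torch.randn(N, C)      # first image features, one vector per pixel
queries = torch.randn(K, C)          # plurality of query vectors (cluster centres)

for _ in range(3):                   # a few clustering iterations (an assumption)
    # Clustering result: soft assignment of every pixel to every query vector.
    assign = torch.softmax(pixel_feats @ queries.t() / C ** 0.5, dim=-1)      # (N, K)
    # Update each query vector with the assignment-weighted mean of the pixels.
    queries = (assign.t() @ pixel_feats) / (assign.sum(dim=0, keepdim=True).t() + 1e-6)

attention_features = queries         # the updated query vectors serve as attention features
```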
Optionally, the above computer-readable storage medium is further configured to store program code for performing the steps of: obtaining a training sample, wherein the training sample comprises a sample image and a preset semantic segmentation result, and the sample image comprises a preset image of a preset organ of a preset biological object; extracting features of the sample image to obtain a second image feature of the preset image; performing cross attention processing on the second image feature and the plurality of query vectors by using the decoder model to obtain a plurality of sample attention features; identifying the sample image based on the second image feature and the plurality of sample attention features to obtain a sample processing result of the sample image, wherein the sample processing result comprises a sample semantic segmentation result and a second pixel set, the sample semantic segmentation result is used for representing pixel points belonging to the preset image in the sample image, and the pixel sets in the second pixel set respectively comprise pixel points of different preset types in the sample image; determining a total loss value of the decoder model based on the sample semantic segmentation result, the preset semantic segmentation result and the second pixel set; and adjusting model parameters of the decoder model based on the total loss value.
Optionally, the above computer-readable storage medium is further configured to store program code for performing the steps of: determining the similarity between the second image feature and the plurality of sample attention features to obtain a plurality of sample similarities; grouping pixels in the sample image based on the plurality of sample similarities to obtain an initial pixel set; and merging the initial pixel sets based on a preset category to obtain the second pixel set.
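A small sketch of how the second pixel set could be formed at training time: pixels are grouped by their most similar sample attention feature (the initial pixel set), and the groups are then merged according to a hypothetical query-to-preset-category mapping:

```python
import torch

torch.manual_seed(0)
N, C, K = 1024, 16, 6
sample_feats = torch.randn(N, C)     # second image feature, one vector per pixel
sample_attn = torch.randn(K, C)      # plurality of sample attention features

# Plurality of sample similarities, then the initial pixel set (one group per query).
sample_sim = sample_feats @ sample_attn.t()                                   # (N, K)
group_id = sample_sim.argmax(dim=-1)                                          # (N,)
initial_pixel_set = [torch.nonzero(group_id == k).flatten() for k in range(K)]

# Hypothetical mapping from query index to preset category: several queries may
# describe the same organ, so their groups are merged into the second pixel set.
preset_category = [0, 0, 1, 1, 2, 2]
num_categories = max(preset_category) + 1
second_pixel_set = [
    torch.cat([initial_pixel_set[k] for k in range(K) if preset_category[k] == c])
    for c in range(num_categories)
]
```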
Optionally, the above computer-readable storage medium is further configured to store program code for performing the steps of: determining a first loss value of the decoder model based on the sample semantic segmentation result and the preset semantic segmentation result; combining the preset semantic segmentation results based on preset categories corresponding to the preset semantic segmentation results to obtain a preset pixel set; determining a second loss value of the decoder model based on the second pixel set and the preset pixel set; and obtaining a weighted sum of the first loss value and the second loss value to obtain the total loss value.
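The total loss can be sketched as a weighted sum of two terms; here the first loss value is taken as a per-pixel cross entropy and the second loss value as a soft Dice-style overlap between the predicted and preset pixel sets, with illustrative weights, since the embodiment does not fix the concrete loss functions or weighting:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
N, num_categories = 1024, 3

# Predicted per-pixel logits (sample semantic segmentation result) and the
# ground-truth labels (preset semantic segmentation result).
pred_logits = torch.randn(N, num_categories, requires_grad=True)
gt_labels = torch.randint(0, num_categories, (N,))

# First loss value: ordinary per-pixel segmentation loss (cross entropy assumed).
first_loss = F.cross_entropy(pred_logits, gt_labels)

# Second loss value: agreement between the predicted second pixel set and the
# preset pixel set, measured here with a soft Dice-style overlap (an assumption).
pred_mask = torch.softmax(pred_logits, dim=-1)                 # (N, num_categories)
gt_mask = F.one_hot(gt_labels, num_categories).float()         # (N, num_categories)
inter = (pred_mask * gt_mask).sum(dim=0)
dice = (2 * inter + 1e-6) / (pred_mask.sum(dim=0) + gt_mask.sum(dim=0) + 1e-6)
second_loss = 1 - dice.mean()

# Total loss value: weighted sum; the weights are illustrative, not prescribed.
total_loss = 1.0 * first_loss + 0.5 * second_loss
total_loss.backward()       # gradients from which the decoder model parameters are adjusted
```

Calling `backward()` on the weighted sum is what allows the model parameters of the decoder model to be adjusted based on the total loss value.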
Optionally, in this embodiment, the computer-readable storage medium is configured to store program code for performing the steps of: displaying an image to be processed on an operation interface in response to an input instruction acting on the operation interface, wherein the image to be processed contains part images of at least one organ of a biological object; and in response to an image processing instruction acting on the operation interface, displaying a target recognition result of the image to be processed on the operation interface, wherein the target recognition result is used for representing the probability that pixel points in the image to be processed meet a preset condition, the target recognition result is obtained by recognizing the image to be processed based on a first image feature of the part image and a plurality of attention features, the plurality of attention features are obtained by performing cross attention processing on the first image feature and a plurality of query vectors, different query vectors are used for representing feature categories of different pixel points in the part image, and the first image feature is obtained by performing feature extraction on the image to be processed.
Optionally, in the present embodiment, the computer readable storage medium is configured to store program code for performing the steps of: displaying an image to be processed on a presentation screen of a Virtual Reality (VR) device or an Augmented Reality (AR) device, wherein the image to be processed contains part images of at least one organ of a biological object; extracting features of the image to be processed to obtain first image features of the part image; cross attention processing is carried out on the first image feature and a plurality of query vectors to obtain a plurality of attention features, wherein different query vectors are used for representing feature categories of different pixel points in the part image; identifying an image to be processed based on the first image feature and the plurality of attention features to obtain a target identification result of the image to be processed, wherein the target identification result is used for representing the probability that pixel points in the image to be processed meet preset conditions; and driving the VR device or the AR device to render the target recognition result.
Optionally, in the present embodiment, the computer readable storage medium is configured to store program code for performing the steps of: acquiring an image to be processed by calling a first interface, wherein the first interface comprises a first parameter, the parameter value of the first parameter is the image to be processed, and the image to be processed comprises part images of at least one organ of a biological object; extracting features of the image to be processed to obtain first image features of the part image; cross attention processing is carried out on the first image feature and a plurality of query vectors to obtain a plurality of attention features, wherein different query vectors are used for representing feature categories of different pixel points in the part image; identifying an image to be processed based on the first image feature and the plurality of attention features to obtain a target identification result of the image to be processed, wherein the target identification result is used for representing the probability that pixel points in the image to be processed meet preset conditions; and outputting a target identification result by calling a second interface, wherein the second interface comprises a second parameter, and the parameter value of the second parameter is the target identification result.
By adopting the embodiments of the present application, an image to be processed is obtained, wherein the image to be processed comprises part images of at least one organ of a biological object; features of the image to be processed are extracted to obtain a first image feature of the part image; cross attention processing is performed on the first image feature and a plurality of query vectors to obtain a plurality of attention features, wherein different query vectors are used for representing feature categories of different pixel points in the part image; and the image to be processed is identified based on the first image feature and the plurality of attention features to obtain a target identification result of the image to be processed, wherein the target identification result is used for representing the probability that pixel points in the image to be processed meet a preset condition, thereby improving the processing accuracy of the image to be processed. It is easy to note that, because the image to be processed is identified based on the first image feature and the plurality of attention features, objects that fall outside the learned distribution (out-of-distribution objects) in the image to be processed can be localized, and abnormal conditions in the image to be processed can be accurately determined, so that the processing accuracy of the image to be processed is improved and the technical problem of low image processing accuracy in the related art is solved.
The foregoing embodiment numbers of the present application are merely for description and do not represent the relative merits of the embodiments.
In the foregoing embodiments of the present application, the description of each embodiment has its own emphasis; for a portion that is not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other manners. The above-described apparatus embodiments are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, units, or modules, and may be in electrical or other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is merely a preferred embodiment of the present application. It should be noted that several improvements and modifications may be made by those of ordinary skill in the art without departing from the principles of the present application, and these improvements and modifications shall also fall within the protection scope of the present application.
Claims (14)
1. An image processing method, comprising:
acquiring an image to be processed, wherein the image to be processed comprises part images of at least one organ of a biological object;
extracting features of the image to be processed to obtain first image features of the part image;
performing cross attention processing on the first image feature and a plurality of query vectors to obtain a plurality of attention features, wherein different query vectors are used for representing feature categories of different pixel points in the part image;
and identifying the image to be processed based on the first image feature and the plurality of attention features to obtain a target identification result of the image to be processed, wherein the target identification result is used for representing the probability that the pixel points in the image to be processed meet a preset condition.
2. The method of claim 1, wherein identifying the image to be processed based on the first image feature and the plurality of attention features to obtain the target identification result of the image to be processed comprises:
determining the similarity between the first image feature and the plurality of attention features to obtain a plurality of similarities;
obtaining the maximum similarity among the plurality of similarities;
and taking the negative of the maximum similarity to obtain the target identification result.
3. The method according to claim 1, wherein the method further comprises:
determining target types corresponding to the plurality of attention features;
determining the similarity between the first image feature and the plurality of attention features to obtain a plurality of similarities;
and performing semantic segmentation on pixels in the image to be processed based on the plurality of similarities and the target types to obtain a target semantic segmentation result, wherein the target semantic segmentation result is used for representing the categories of the pixel points belonging to the part image in the image to be processed.
4. The method of claim 3, wherein performing semantic segmentation on the pixels in the image to be processed based on the plurality of similarities and the target types to obtain the target semantic segmentation result comprises:
grouping the pixels in the image to be processed based on the plurality of similarities to obtain a first pixel set;
and classifying the first pixel set based on the target types to obtain the target semantic segmentation result.
5. The method of claim 1, wherein performing cross attention processing on the first image feature and the plurality of query vectors to obtain the plurality of attention features comprises:
and performing cross attention processing on the first image feature and the plurality of query vectors by using a decoder model to obtain the plurality of attention features.
6. The method of claim 5, wherein performing cross attention processing on the first image feature and the plurality of query vectors by using the decoder model to obtain the plurality of attention features comprises:
clustering the first image features based on the plurality of query vectors to obtain a clustering result;
updating the plurality of query vectors based on the clustering result to obtain the plurality of attention features.
7. The method of claim 5, wherein the method further comprises:
obtaining a training sample, wherein the training sample comprises: a sample image and a preset semantic segmentation result, wherein the sample image comprises a preset image of a preset organ of a preset biological object;
extracting features of the sample image to obtain second image features of the preset image;
performing cross attention processing on the second image feature and the plurality of query vectors by using the decoder model to obtain a plurality of sample attention features;
identifying the sample image based on the second image feature and the plurality of sample attention features to obtain a sample processing result of the sample image, wherein the sample processing result comprises: a sample semantic segmentation result and a second pixel set, wherein the sample semantic segmentation result is used for representing pixel points belonging to the preset image in the sample image, and the pixel sets in the second pixel set respectively comprise pixel points of different preset types in the sample image;
determining a total loss value of the decoder model based on the sample semantic segmentation result, the preset semantic segmentation result and the second pixel set;
and adjusting model parameters of the decoder model based on the total loss value.
8. The method of claim 7, wherein identifying the sample image based on the second image feature and the plurality of sample attention features to obtain the second pixel set comprises:
determining the similarity between the second image feature and the plurality of sample attention features to obtain a plurality of sample similarities;
grouping pixels in the sample image based on the plurality of sample similarities to obtain an initial pixel set;
and merging the initial pixel sets based on a preset category to obtain the second pixel set.
9. The method of claim 7, wherein determining the total loss value of the decoder model based on the sample semantic segmentation result, the preset semantic segmentation result, and the second set of pixels comprises:
determining a first loss value of the decoder model based on the sample semantic segmentation result and the preset semantic segmentation result;
combining the preset semantic segmentation results based on preset categories corresponding to the preset semantic segmentation results to obtain a preset pixel set;
determining a second loss value of the decoder model based on the second set of pixels and the preset set of pixels;
and obtaining a weighted sum of the first loss value and the second loss value to obtain the total loss value.
10. An image processing method, comprising:
displaying an image to be processed on an operation interface in response to an input instruction acting on the operation interface, wherein the image to be processed comprises part images of at least one organ of a biological object;
and responding to an image processing instruction acting on the operation interface, and displaying a target recognition result of the image to be processed on the operation interface, wherein the target recognition result is used for representing the probability that pixel points in the image to be processed meet a preset condition, the target recognition result is obtained by recognizing the image to be processed based on a first image feature and a plurality of attention features of the part image, the plurality of attention features are obtained by carrying out cross attention processing on the first image feature and a plurality of query vectors, different query vectors are used for representing feature types of different pixel points in the part image, and the first image feature is obtained by carrying out feature extraction on the image to be processed.
11. An image processing method, comprising:
displaying an image to be processed on a presentation screen of a Virtual Reality (VR) device or an Augmented Reality (AR) device, wherein the image to be processed contains part images of at least one organ of a biological object;
extracting features of the image to be processed to obtain first image features of the part image;
performing cross attention processing on the first image feature and a plurality of query vectors to obtain a plurality of attention features, wherein different query vectors are used for representing feature categories of different pixel points in the part image;
identifying the image to be processed based on the first image feature and the plurality of attention features to obtain a target identification result of the image to be processed, wherein the target identification result is used for representing the probability that pixel points in the image to be processed meet preset conditions;
and driving the VR equipment or the AR equipment to render the target recognition result.
12. An image processing method, comprising:
acquiring an image to be processed by calling a first interface, wherein the first interface comprises a first parameter, the parameter value of the first parameter is the image to be processed, and the image to be processed comprises a part image of at least one organ of a biological object;
extracting features of the image to be processed to obtain first image features of the part image;
performing cross attention processing on the first image feature and a plurality of query vectors to obtain a plurality of attention features, wherein different query vectors are used for representing feature categories of different pixel points in the part image;
identifying the image to be processed based on the first image feature and the plurality of attention features to obtain a target identification result of the image to be processed, wherein the target identification result is used for representing the probability that pixel points in the image to be processed meet preset conditions;
and outputting the target recognition result by calling a second interface, wherein the second interface comprises a second parameter, and the parameter value of the second parameter is the target recognition result.
13. A computer readable storage medium, characterized in that the computer readable storage medium comprises a stored program, wherein the program, when run, controls a device in which the computer readable storage medium is located to perform the method of any one of claims 1 to 12.
14. A computer terminal, comprising:
a processor;
a memory coupled to the processor and configured to provide the processor with instructions for performing the method of any one of claims 1 to 12.