CN114820591B - Image processing method, image processing apparatus, electronic device, and medium - Google Patents

Image processing method, image processing apparatus, electronic device, and medium

Info

Publication number
CN114820591B
CN114820591B (application CN202210635894.6A)
Authority
CN
China
Prior art keywords
region
image
processed
interest
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210635894.6A
Other languages
Chinese (zh)
Other versions
CN114820591A (en)
Inventor
张可欣
王子腾
丁佳
吕晨翀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Yizhun Intelligent Technology Co ltd
Original Assignee
Beijing Yizhun Medical AI Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yizhun Medical AI Co Ltd filed Critical Beijing Yizhun Medical AI Co Ltd
Priority to CN202210635894.6A priority Critical patent/CN114820591B/en
Publication of CN114820591A publication Critical patent/CN114820591A/en
Application granted granted Critical
Publication of CN114820591B publication Critical patent/CN114820591B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • G06T3/067
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration by the use of local operators
    • G06T5/30Erosion or dilatation, e.g. thinning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10116X-ray image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30068Mammography; Breast

Abstract

The embodiments of the disclosure provide an image processing method, an image processing apparatus, an electronic device, and a medium. The image processing method comprises: obtaining a breast tomographic image, the breast tomographic image comprising an image sequence composed of a plurality of images; non-contiguously selecting a plurality of images to be processed from the image sequence; identifying the regions of interest in each image to be processed and obtaining a display priority for each region of interest; projecting the regions of interest in the images to be processed onto the same plane to filter the regions of interest; and selecting a number of regions of interest with the highest display priority from the filtered regions of interest for output.

Description

Image processing method, image processing apparatus, electronic device, and medium
Technical Field
The present disclosure relates to the field of computer vision technologies, and in particular, to an image processing method and apparatus, an electronic device, and a medium.
Background
In the field of medical imaging, a sequence of medical images of a patient is often displayed. Tomographic imaging, such as digital breast tomosynthesis (DBT), can provide a three-dimensional reconstruction of a structure or tissue of the patient's breast, which can be displayed as a sequence of two-dimensional cross-sections.
A particular image in the sequence of medical images may include a region of particular interest to a physician or other user. Some existing methods detect and prompt the user about regions of interest within a sequence of medical images by detecting feature locations in each two-dimensional cross-section. The region of interest may be outlined by a box, indicated by an arrow, or highlighted by a color and/or intensity change.
When used to alert a doctor or other user to regions of interest in breast tomographic images, these methods suffer from high resource consumption and low detection efficiency.
Disclosure of Invention
To solve the problems in the related art, embodiments of the present disclosure provide an image processing method, an image processing apparatus, an electronic device, and a medium.
One aspect of the present disclosure provides an image processing method, including: obtaining a breast tomographic image, the breast tomographic image comprising an image sequence composed of a plurality of images; non-contiguously selecting a plurality of images to be processed from the image sequence; identifying the regions of interest in each image to be processed and obtaining the display priority of each region of interest; projecting the regions of interest in the plurality of images to be processed onto the same plane to filter the regions of interest; and selecting a number of regions of interest with the highest display priority from the filtered regions of interest for output.
Another aspect of the present disclosure provides an image processing apparatus including an obtaining module, an identification module, a filtering module, and an output module. The obtaining module is configured to obtain a breast tomographic image comprising an image sequence composed of a plurality of images. The identification module is configured to non-contiguously select a plurality of images to be processed from the image sequence, identify the regions of interest in each image to be processed, and obtain the display priority of each region of interest. The filtering module is configured to project the regions of interest in the plurality of images to be processed onto the same plane to filter the regions of interest. The output module is configured to select a number of regions of interest with the highest display priority from the filtered regions of interest for output.
Another aspect of the disclosure provides an electronic device comprising at least one processor and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to: obtain a breast tomographic image comprising an image sequence composed of a plurality of images; non-contiguously select a plurality of images to be processed from the image sequence; identify the regions of interest in each image to be processed and obtain the display priority of each region of interest; project the regions of interest in the plurality of images to be processed onto the same plane to filter the regions of interest; and select a number of regions of interest with the highest display priority from the filtered regions of interest for output.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-readable instructions for implementing the image processing method as described above when executed by a processor.
Another aspect of the disclosure provides a computer program comprising computer executable instructions for implementing the image processing method as described above when executed.
Since in some scenarios the physician does not care about the specific details of every region of interest, the image processing method provided by the disclosed embodiments non-contiguously selects a plurality of images to be processed from the image sequence, identifies the regions of interest in each image to be processed, projects the regions of interest onto the same plane for filtering, and selects a number of regions of interest with the highest display priority from the filtered regions for output, thereby saving a large amount of computing resources while rapidly and clearly providing the detection results for multiple regions of interest.
Drawings
Other features, objects, and advantages of the present disclosure will become more apparent from the following detailed description of non-limiting embodiments when taken in conjunction with the accompanying drawings. In the drawings:
fig. 1 schematically shows a system architecture diagram to which an image processing method of an embodiment of the present disclosure is applied;
FIG. 2 schematically illustrates a flow chart of an image processing method of an embodiment of the present disclosure;
FIG. 3 schematically illustrates a schematic view of a breast tomogram of an embodiment of the present disclosure;
FIG. 4 schematically illustrates a flow chart of identifying a region of interest of an embodiment of the present disclosure;
FIG. 5 schematically illustrates a flow chart of identifying a region of interest according to another embodiment of the present disclosure;
FIG. 6 schematically illustrates a flow chart of identifying a region of interest according to another embodiment of the present disclosure;
FIG. 7 schematically illustrates a flow chart of identifying a second feature region of an embodiment of the present disclosure;
fig. 8A schematically illustrates a schematic diagram of an image to be processed according to an embodiment of the present disclosure;
FIG. 8B schematically illustrates a schematic view of an image with a mask according to an embodiment of the disclosure;
FIG. 8C schematically illustrates a schematic diagram of preliminary results of body contouring for embodiments of the present disclosure;
FIG. 8D schematically illustrates a schematic view of the body contour expansion of an embodiment of the present disclosure;
fig. 8E schematically illustrates a schematic view of a region of interest of an embodiment of the present disclosure;
fig. 9 schematically shows a block diagram of an image processing apparatus of an embodiment of the present disclosure; and
fig. 10 schematically shows a structural diagram of a computer system adapted to implement the image processing method of the embodiment of the present disclosure.
Detailed Description
Hereinafter, exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily implement them. Also, for the sake of clarity, parts not relevant to the description of the exemplary embodiments are omitted in the drawings.
In the present disclosure, it is to be understood that terms such as "including" or "having," etc., are intended to indicate the presence of the disclosed features, numbers, steps, behaviors, components, parts, or combinations thereof, and are not intended to preclude the possibility that one or more other features, numbers, steps, behaviors, components, parts, or combinations thereof may be present or added.
It should be further noted that the embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
It should be noted that the acquisition or presentation of data in this disclosure is authorized, confirmed, or actively selected by the user.
An embodiment of the present disclosure provides an image processing method, including: obtaining a breast tomographic image comprising an image sequence composed of a plurality of images; non-contiguously selecting a plurality of images to be processed from the image sequence; identifying the regions of interest in each image to be processed and obtaining the display priority of each region of interest; projecting the regions of interest in the plurality of images to be processed onto the same plane to filter the regions of interest; and selecting a number of regions of interest with the highest display priority from the filtered regions of interest for output. When the doctor does not need the specific details of every region of interest, this saves a large amount of computing resources and provides the detection results for multiple regions of interest rapidly and clearly.
According to an embodiment of the present disclosure, the region of interest comprises a microcalcification prediction region. A cross-section of the three-dimensional reconstruction in a breast tomographic image may show microcalcification clusters. Discovering microcalcification clusters helps indicate a patient's precancerous state, but physicians rarely need detailed information about each cluster. The method of the disclosed embodiments can therefore quickly display the most prominent microcalcification clusters detected by sampling, making it convenient for the doctor to reach a timely judgment.
It should be noted that, beyond microcalcification clusters, the region of interest may also include other types of focal or non-focal predicted regions wherever similar requirements exist. For example, in some scenarios, the region of interest may also include masses or the like.
Technical solutions provided by the embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
Fig. 1 schematically shows a system architecture diagram to which an image processing method according to an embodiment of the present disclosure is applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The terminal devices 101, 102, 103 interact with a server 105 via a network 104 to receive or send messages or the like. Various client applications may be installed on the terminal devices 101, 102, 103. Such as browser-type applications, search-type applications, instant messaging-type tools, and so forth.
The terminal apparatuses 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various special purpose or general purpose electronic devices including, but not limited to, smart phones, tablet computers, laptop portable computers, desktop computers, and the like. When the terminal apparatuses 101, 102, 103 are software, they can be installed in the electronic apparatuses listed above. It may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules to provide distributed services) or as a single piece of software or software module.
The server 105 may be a server that provides various services, such as a backend server that provides services for client applications installed on the terminal devices 101, 102, 103.
The server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or may be implemented as a single server. When the server 105 is software, it may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules used to provide distributed services), or as a single piece of software or software module.
The image processing method provided by the embodiment of the present disclosure may be executed by the terminal devices 101, 102, and 103, or may be executed by the server 105, for example. Alternatively, the image processing method of the embodiment of the present disclosure may be partially executed by the terminal apparatuses 101, 102, 103, and the other part is executed by the server 105.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 2 illustrates a flowchart of an image processing method according to an embodiment of the present disclosure.
As shown in fig. 2, the image processing method includes operations S202, S204, S206, and S208.
In operation S202, a breast tomographic image including an image sequence composed of a plurality of images is obtained.
In operation S204, a plurality of images to be processed are non-continuously selected from the image sequence, regions of interest in each of the images to be processed are respectively identified, and a display priority of each region of interest is obtained.
In operation S206, regions of interest in the plurality of images to be processed are projected to the same plane to filter the regions of interest.
In operation S208, a number of regions of interest with the highest display priority are selected from the filtered regions of interest for output.
Digital breast tomosynthesis (DBT) is an advance over digital mammography. Breast tomosynthesis uses an X-ray source that moves in an arc around the breast to acquire information from the breast tissue and reconstruct high-resolution images as "slices" of breast tissue 0.5-1.0 mm thick. By presenting an image of a particular plane within the breast, tomosynthesis can eliminate the overlapping breast tissue that, in standard mammography, may mask a lesion when the three-dimensional breast is projected onto a two-dimensional image plane. Tomosynthesis therefore visualizes breast lesions more clearly and reduces false-positive results caused by the superimposition of adjacent normal breast tissue.
Fig. 3 schematically shows a schematic view of a breast tomographic image of an embodiment of the present disclosure.
As shown in fig. 3, the breast tomographic image includes an image sequence composed of a plurality of images. For example, an image of each layer of the breast may be taken at a thickness of 1 mm, so imaging a breast 100 mm thick yields 100 two-dimensional images. According to the embodiment of the present disclosure, breast tomographic images may be taken for the left and right breasts separately. Tomographic images of the same breast can be taken in different views and used in combination to obtain more accurate medical information. The image processing method of the disclosed embodiments processes one group of two-dimensional images of one breast taken in one view.
Reference is made back to fig. 2. In operation S204, a plurality of images to be processed are non-contiguously selected from the image sequence; each layer image can be used to detect the region of interest separately. Microcalcification clusters, masses, and the like exist in three-dimensional space, so the same region of interest can appear simultaneously on multiple adjacent two-dimensional images. To save computing resources and improve processing efficiency, only some of the images need to be detected rather than all of the two-dimensional images.
According to the embodiment of the disclosure, a plurality of images to be processed can be selected from the image sequence at predetermined intervals. For example, a two-dimensional image may be selected as the image to be processed every 10 layers, so that the resource cost and time cost of calculation may be greatly reduced. Alternatively, a plurality of images to be processed may be randomly selected from the image sequence.
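The interval sampling above can be sketched in a few lines (the helper name is illustrative, not from the patent):

```python
def select_slices(num_slices, step=10):
    """Return the indices of the slices to process, sampled every `step` layers."""
    return list(range(0, num_slices, step))

# A 100-slice DBT stack sampled every 10 layers leaves 10 images to process,
# a tenfold reduction in detection work.
indices = select_slices(100, step=10)
```

Random selection would simply replace the fixed stride with a draw of indices from the sequence.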
In operation S204, after the images to be processed are selected, the regions of interest in each selected image are identified and the display priority of each region of interest is obtained. For example, an existing method may be chosen to identify microcalcification clusters in a two-dimensional breast image. In addition, the present disclosure provides several embodiments that differ from existing methods; the methods for identifying a region of interest according to the disclosed embodiments are described below with reference to figs. 4 to 7 and figs. 8A to 8E.
Machine-learning segmentation tends to produce false positives, for example bright regions at the gland edge being misidentified as regions of interest. To suppress such false positives, the approach commonly adopted in the field is to increase the number of network layers and add training samples; however, this greatly increases training difficulty and reduces processing efficiency.
Fig. 4 schematically illustrates a flow chart of identifying a region of interest according to an embodiment of the present disclosure.
As shown in fig. 4, the method includes operations S402, S404, and S406.
In operation S402, a first feature region in the to-be-processed image is identified, where the first feature region is a preliminary prediction region of a region of interest.
According to embodiments of the present disclosure, the region of interest may be initially identified using an existing segmentation algorithm. For example, a trained neural network may process the image to be processed to obtain the first feature region. The neural network may be, for example, U-Net: a trained U-Net can process the image to be processed to obtain the first feature region. U-Net, published in 2015, is an encoder-decoder architecture whose symmetric network structure resembles the English letter U (hence the name); it can achieve good accuracy with limited data sets.
According to the method of the disclosed embodiments, when the preliminary prediction regions of the region of interest are detected, the algorithm also gives a score for each preliminary prediction region; the higher the score, the higher the probability that the region is a region of interest. The score may be used as the display priority of the region of interest.
In operation S404, a second feature region in the to-be-processed image is identified through an image gradient algorithm, where the second feature region is a body contour region.
An image gradient algorithm detects edges by traversing the image with a gradient operator and belongs to traditional image processing. In the specific scenario of body-contour detection in a breast image, there is generally no signal outside the breast while the breast region carries signal; compared with a machine learning algorithm, the traditional image gradient algorithm adopted by the disclosed embodiments is therefore not only efficient and fast but also extremely accurate.
In operation S406, the first feature region is eroded based on the second feature region, so as to obtain a region of interest in the to-be-processed image.
According to the embodiment of the disclosure, an operation may be performed pixel by pixel using the second feature map showing the second feature region and the first feature map showing the first feature region to remove the second feature region from the first feature region.
For example, suppose the first feature map includes a background region with gray value 0 and a first feature region with gray value 1, and the second feature map likewise includes a background region with gray value 0 and a second feature region with gray value 1. The second feature map may then be inverted and a pixel-wise logical AND performed with the first feature map, yielding the region of interest with the second feature region removed from the first feature region.
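A minimal NumPy sketch of this invert-and-AND step (the function name is illustrative, not from the patent):

```python
import numpy as np

def erode_prediction(first_map, second_map):
    """Remove the second feature region (body contour) from the first
    feature region (preliminary prediction); both maps are 0/1 arrays."""
    return first_map & (1 - second_map)

first = np.array([[0, 1, 1],
                  [0, 1, 0]], dtype=np.uint8)   # preliminary prediction
second = np.array([[0, 1, 0],
                   [0, 0, 0]], dtype=np.uint8)  # body contour
roi = erode_prediction(first, second)           # contour pixel removed
```

The same result follows from `first_map & ~second_map.astype(bool)`; the arithmetic form mirrors the inversion described in the text.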
According to the embodiment of the disclosure, the body contour is detected by using an image gradient algorithm, and the primary detection result is corroded by using the body contour, so that compared with a machine learning algorithm, the method for identifying the region of interest of the embodiment of the disclosure has high efficiency and high speed, and can effectively remove the highlight region at the gland edge and inhibit false positive results.
Because the data volume of a breast tomographic image is large (a single image to be processed is about 600 × 1500 pixels) and direct segmentation is too costly, the method for identifying a region of interest provided by another embodiment of the disclosure improves processing speed by cutting the image to be processed into non-overlapping blocks, segmenting each block separately, and finally stitching the results together.
Fig. 5 schematically illustrates a flow chart of identifying a region of interest according to another embodiment of the present disclosure.
As shown in fig. 5, the method includes operations S502, S504, and S506.
In operation S502, the image to be processed is divided into a plurality of blocks. For example, a 600 × 1500 image to be processed may be divided into ten 300 × 300 blocks.
In operation S504, the regions of interest in each block are identified separately, yielding an identification result for each block. As in operation S402, any available existing method may be used here. Each block may be processed, for example with U-Net, to obtain an identification result comprising the predicted regions of interest of the block and a score for each region. The higher the score, the higher the probability that the region is a region of interest; the score may be taken as the display priority of the region of interest.
In operation S506, the recognition results are stitched to obtain the region of interest in the image to be processed. In this step, two or more regions of interest that become connected after stitching may be merged into one region of interest. The display priority of the merged region of interest may be determined from the display priorities of the two or more regions that were merged.
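The split-and-stitch structure can be sketched as follows (a simplified illustration with hypothetical helper names; the real pipeline would run the segmentation network on each block where this sketch merely carries the blocks through):

```python
import numpy as np

def split_into_blocks(img, bh, bw):
    """Cut a 2-D image into non-overlapping bh x bw blocks, row-major order."""
    h, w = img.shape
    return [img[r:r + bh, c:c + bw]
            for r in range(0, h, bh) for c in range(0, w, bw)]

def stitch_blocks(blocks, h, w, bh, bw):
    """Reassemble per-block results into a full-size map."""
    out = np.empty((h, w), dtype=blocks[0].dtype)
    it = iter(blocks)
    for r in range(0, h, bh):
        for c in range(0, w, bw):
            out[r:r + bh, c:c + bw] = next(it)
    return out

img = np.arange(600 * 1500).reshape(600, 1500)
blocks = split_into_blocks(img, 300, 300)   # 2 x 5 = 10 blocks
restored = stitch_blocks(blocks, 600, 1500, 300, 300)
```

The example uses the 600 × 1500 size from the text; dimensions that do not divide evenly would need padding, which the patent does not discuss.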
According to the embodiment of the disclosure, the two methods shown in fig. 4 and 5 can be combined for use, which not only can solve the problem of large data volume of breast tomography, but also can effectively inhibit false positives. The following description is made with reference to the embodiment illustrated in fig. 6.
Fig. 6 schematically illustrates a flow chart of identifying a region of interest according to another embodiment of the present disclosure.
As shown in fig. 6, the method includes operations S602, S604, S606, and S608. Operations S602 and S604 are the same as operations S502 and S504 illustrated in fig. 5, and are not described again here.
The method also performs operation S608 before performing operation S606. Similarly to operation S404 described above, in operation S608, a second feature region in the to-be-processed image is identified through an image gradient algorithm, where the second feature region is a body contour region.
In operation S606, the recognition results are stitched, and the stitched recognition result is eroded based on the second feature region to obtain the region of interest in the image to be processed. For operation S606, refer to operations S506 and S406 described above, which will not be further described here.
Reference is now made to the embodiments illustrated in fig. 7 and figs. 8A-8E, which further describe possible specific implementations of operation S404 or operation S608.
Fig. 7 schematically illustrates a flowchart of identifying a second feature region according to an embodiment of the present disclosure.
As shown in fig. 7, the method includes operations S702, S704, S706, S708, and S710.
In operation S702, a background gray value is determined. For example, in breast tomography, the background gray value is usually 0.
In operation S704, a mask is formed by assigning the mask gray value to every pixel that differs from the background gray value. The mask gray value may be any nonzero value, for example 1 or 255. Assigning the mask gray value to every pixel whose gray value is not 0 yields a binary image; the portion whose pixels carry the mask gray value is referred to as the mask.
In operation S706, the mask is dilated to eliminate internal voids. According to the embodiment of the present disclosure, pixels with a gray value of 0 may also occur inside the breast, and these internal voids can be eliminated by dilating the mask. For example, portions with nonzero gray values may be expanded by 1-2 pixels. In the resulting image, the gray value of the background region is 0 and the gray value of the breast region is nonzero.
In operation S708, image gradients are calculated to determine preliminary results of the body contour. The image may be traversed using a gradient operator to detect edges, determining preliminary results for body contours.
In operation S710, the preliminary result is dilated to obtain the second feature region. The preliminary result of the body contour is a thin line too narrow to cover the false-positive area, so it must be dilated to obtain a thicker contour.
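Operations S702-S710 can be sketched end to end in NumPy (a dependency-free illustration; the helper names and the simple 4-neighbour dilation are assumptions, and a real implementation would more likely use OpenCV or SciPy morphology):

```python
import numpy as np

def dilate(mask, iterations=1):
    """Crude 4-neighbour binary dilation, a stand-in for cv2.dilate."""
    out = mask.copy()
    for _ in range(iterations):
        grown = out.copy()
        grown[1:, :] |= out[:-1, :]
        grown[:-1, :] |= out[1:, :]
        grown[:, 1:] |= out[:, :-1]
        grown[:, :-1] |= out[:, 1:]
        out = grown
    return out

def body_contour(img, background=0, thickness=2):
    mask = (img != background).astype(np.uint8)  # S702-S704: mask non-background pixels
    mask = dilate(mask, thickness)               # S706: fill internal voids
    gy, gx = np.gradient(mask.astype(float))     # S708: image gradient marks the edge
    edge = ((np.abs(gx) + np.abs(gy)) > 0).astype(np.uint8)
    return dilate(edge, thickness)               # S710: thicken the thin contour line

img = np.zeros((10, 10), dtype=np.uint8)
img[3:7, 3:7] = 5                                # a bright "breast" region
contour = body_contour(img)
```

The returned map is nonzero in a band along the body outline and zero deep inside the region, which is exactly the shape needed to erode false positives at the gland edge.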
Fig. 8A-8E schematically illustrate various schematic diagrams of a process of processing an image to be processed according to an embodiment of the present disclosure.
According to the embodiment of the present disclosure, an image in the image sequence is first determined as a to-be-processed image, which is a two-dimensional section of a breast sectional image as shown in fig. 8A.
According to the embodiment of the present disclosure, the image to be processed is then masked to obtain the image shown in fig. 8B. For example, in the image to be processed the background gray value is 0, rendered as black in fig. 8A and 8B. The mask gray value can be defined as 255, rendered as white in fig. 8B. All pixels with nonzero gray values are assigned the value 255, producing the masked image. The mask with gray value 255 is then dilated to eliminate internal voids, resulting in the masked image shown in fig. 8B.
According to an embodiment of the present disclosure, image gradients are calculated on the masked image to determine the preliminary result of the body contour. The image illustrated in fig. 8B may be traversed with a gradient operator, yielding the preliminary result of the body contour illustrated in fig. 8C.
According to an embodiment of the disclosure, the preliminary result illustrated in fig. 8C is subjected to an expansion process to obtain a second feature region, as shown in fig. 8D.
On the other hand, a trained neural network may be used to process the image to be processed to obtain the first feature region. For example, the neural network may be a U-Net, and the trained U-Net may be used to process the image to be processed to obtain the first feature region. The first feature region includes possible microcalcification clusters as well as noise, mainly noise caused by the body contour. The first feature region is eroded using the second feature region shown in fig. 8D, resulting in the region of interest shown in fig. 8E.
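The pixel-by-pixel erosion of the first feature map by the second feature map, as described here and in the claims (removing the second feature region from the first), could look like the following minimal sketch; the function name is hypothetical:

```python
import numpy as np

def cull_contour_noise(first_map, second_map):
    """Pixel by pixel, zero out every candidate pixel of the first
    feature map (e.g. the U-Net output) that falls inside the dilated
    body-contour region, keeping only interior candidates."""
    roi = first_map.copy()
    roi[second_map > 0] = 0
    return roi
```

Because the second feature region was dilated into a thick band, this removes not only the contour line itself but also the false positives clustered around it.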
Reference is made back to fig. 2. After the detection of the regions of interest in the respective images to be processed is completed, operation S206 may be performed to project the regions of interest in the plurality of images to be processed to the same plane to filter the regions of interest. For example, during detection of microcalcification clusters, regions of interest in different images to be processed may indicate the same microcalcification cluster. By projecting the regions of interest in different images to be processed to the same plane, the redundant parts among them can be filtered out, avoiding the situation in which a doctor is misled into regarding one lesion as two.
According to an embodiment of the present disclosure, the method of filtering may be non-maximum suppression (NMS), i.e., suppressing elements that are not the maximum. Non-maximum suppression selects the window with the highest score in a neighborhood and suppresses the windows with lower scores.
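A minimal greedy NMS over axis-aligned boxes might look like this; it is a common formulation of the technique, and the patent does not specify the exact variant or the IoU threshold:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop remaining boxes that overlap it beyond iou_thresh, repeat.
    boxes: (N, 4) array of [x1, y1, x2, y2]; returns kept indices."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        # Intersection of box i with the remaining boxes.
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thresh]
    return keep
```

Running it on the projected boxes from all slices collapses duplicate detections of the same microcalcification cluster into a single box.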
Then, in operation S208, a plurality of regions of interest with the highest display priority are selected from the filtered regions of interest for output. For example, in the detection scenario of microcalcification clusters, the 5 regions of interest with the highest display priority can be retained; these results are typically very salient calcifications. The 5 regions of interest may lie in the same two-dimensional image or may be distributed over up to 5 two-dimensional images. When the result is output, these 1-5 two-dimensional images can be displayed to the user with 5 detection boxes marked, prompting the user about possible microcalcification clusters.
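The top-k selection and per-slice grouping described above could be sketched as follows; the tuple layout (slice index, box, score) and the function name are assumptions for illustration:

```python
def top_k_regions(regions, k=5):
    """regions: list of (slice_index, box, score) after NMS filtering.
    Keep the k highest-priority (highest-scoring) regions; they may
    all lie in one 2-D slice or be spread over up to k slices."""
    kept = sorted(regions, key=lambda r: r[2], reverse=True)[:k]
    # Group by slice so each displayed 2-D image carries its own boxes.
    by_slice = {}
    for idx, box, score in kept:
        by_slice.setdefault(idx, []).append((box, score))
    return by_slice
```

The number of keys in the returned dictionary is the number of two-dimensional images (between 1 and k) that would be shown to the user.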
According to the image processing method provided by the embodiment of the disclosure, a plurality of images to be processed are non-continuously selected from an image sequence, the regions of interest in each image to be processed are respectively identified and projected to the same plane for filtering, and a plurality of regions of interest with the highest display priority are selected from the filtered regions of interest for output. A large amount of computing resources can thus be saved, and the detection results of the plurality of regions of interest can be provided quickly and clearly.
Based on the same inventive concept, the present disclosure also provides an image processing apparatus, and the image processing apparatus of the embodiment of the present disclosure is explained below with reference to fig. 9.
Fig. 9 schematically illustrates a block diagram of an image processing apparatus 900 according to an embodiment of the present disclosure. The apparatus 900 may be implemented as part of or all of an electronic device through software, hardware, or a combination of both.
As shown in fig. 9, the image processing apparatus 900 includes an obtaining module 902, a recognition module 904, a filtering module 906, and an output module 908. The image processing apparatus 900 may perform the various methods described above.
An obtaining module 902 configured to obtain a breast tomogram comprising an image sequence composed of a plurality of images;
an identifying module 904 configured to non-continuously select a plurality of images to be processed from the image sequence, respectively identify regions of interest in each of the images to be processed, and obtain a display priority of each region of interest;
a filtering module 906 configured to project regions of interest in the plurality of images to be processed to the same plane to filter the regions of interest;
an output module 908 configured to select a plurality of regions of interest with the highest display priority from the filtered regions of interest for output.
According to the technical solution of the embodiment of the disclosure, in the case where a doctor does not need the details of every region of interest, the image processing apparatus provided by the embodiment of the disclosure can save a large amount of computing resources and quickly and clearly provide the detection results of a plurality of regions of interest.
According to an embodiment of the present disclosure, the identification module may include a first identification submodule, a second identification submodule, and an erosion submodule. The first identification submodule is configured to identify a first feature region in the image to be processed, the first feature region being a preliminary prediction region of a region of interest. The second identification submodule is configured to identify a second feature region in the image to be processed through an image gradient algorithm, the second feature region being a body contour region. The erosion submodule is configured to erode the first feature region based on the second feature region, resulting in a region of interest in the image to be processed.
According to an embodiment of the present disclosure, the identification module may include a segmentation submodule, a third identification submodule, and a stitching submodule. The segmentation submodule is configured to segment one image to be processed into a plurality of blocks. The third identification submodule is configured to identify the region of interest in each block respectively, obtaining a recognition result for each block. The stitching submodule is configured to stitch the recognition results to obtain the region of interest in the image to be processed.
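The block-wise identification and stitching described above could be sketched as follows, assuming non-overlapping tiles and a caller-supplied per-tile detector (both assumptions; the patent does not specify the tiling scheme):

```python
import numpy as np

def identify_by_blocks(image, block, detect):
    """Segment a large slice into block x block tiles, run the detector
    on each tile, and stitch the per-tile results back together."""
    h, w = image.shape
    out = np.zeros_like(image)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = image[y:y + block, x:x + block]  # edge tiles auto-clip
            out[y:y + block, x:x + block] = detect(tile)
    return out
```

Processing tiles rather than the whole slice keeps the detector's memory footprint bounded, which matters when the slices of a tomographic sequence are large.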
According to the embodiment of the disclosure, the apparatus may further include a second identification submodule configured to identify a second feature region in the image to be processed through an image gradient algorithm, where the second feature region is a body contour region. The stitching submodule may be further configured to stitch the recognition results and obtain the region of interest in the image to be processed by eroding the stitched recognition result based on the second feature region.
According to an embodiment of the present disclosure, the second recognition submodule may include a background determination unit, a mask unit, a first expansion unit, a gradient calculation unit, and a second expansion unit. Wherein the background determination unit is configured to determine a background grayscale value. The masking unit is configured to form a mask by assigning a mask gray value to a pixel different from the background gray value. The first expansion unit is configured to perform an expansion process on the mask to eliminate an internal void. The gradient calculation unit is configured to calculate image gradients to determine preliminary results of the body contour. The second expansion unit is configured to perform expansion processing on the preliminary result to obtain the second characteristic region.
According to an embodiment of the present disclosure, non-continuously selecting a plurality of images to be processed from the image sequence includes selecting a plurality of images to be processed from the image sequence at predetermined intervals.
According to an embodiment of the present disclosure, non-continuously selecting a plurality of images to be processed from the image sequence includes randomly selecting a plurality of images to be processed from the image sequence.
According to an embodiment of the present disclosure, the region of interest comprises a microcalcification prediction region.
The present disclosure also discloses an electronic device comprising a memory and a processor. The memory stores a program enabling the electronic device to perform the image processing method in any of the above embodiments, and the processor is configured to execute the program stored in the memory to implement the image processing method described in any of the above embodiments of figs. 2-8.
Fig. 10 schematically shows a structural diagram of a computer system adapted to implement the image processing method of the embodiment of the present disclosure.
As shown in fig. 10, the computer system 1000 includes a processing unit 1001 that can execute various processes in the above-described embodiments according to a program stored in a Read Only Memory (ROM) 1002 or a program loaded from a storage section 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data necessary for the operation of the system 1000 are also stored. The processing unit 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to bus 1004.
The following components are connected to the I/O interface 1005: an input section 1006 including a keyboard, a mouse, and the like; an output section 1007 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage portion 1008 including a hard disk and the like; and a communication section 1009 including a network interface card such as a LAN card, a modem, or the like. The communication section 1009 performs communication processing via a network such as the internet. The driver 1010 is also connected to the I/O interface 1005 as necessary. A removable medium 1011 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 1010 as necessary, so that a computer program read out therefrom is mounted into the storage section 1008 as necessary. The processing unit 1001 may be implemented as a CPU, a GPU, a TPU, an FPGA, an NPU, or other processing units.
In particular, the above-described methods may be implemented as computer software programs according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a computer-readable medium, the computer program comprising program code for performing the above-described method. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 1009 and/or installed from the removable medium 1011.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present disclosure may be implemented by software or by programmable hardware. The units or modules described may also be provided in a processor, and the names of the units or modules do not in some cases constitute a limitation on the units or modules themselves.
As another aspect, the present disclosure also provides a computer-readable storage medium, which may be a computer-readable storage medium included in the electronic device or the computer system in the above embodiments; or it may be a separate computer readable storage medium not incorporated into the device. The computer readable storage medium stores one or more programs for use by one or more processors in performing the methods described in the present disclosure.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the present disclosure is not limited to the specific combination of the above-mentioned features, but also encompasses other embodiments in which any combination of the above-mentioned features or their equivalents is made without departing from the inventive concept. For example, the above features may be replaced with (but not limited to) features disclosed in this disclosure having similar functions to form a technical solution.

Claims (14)

1. An image processing method, comprising:
obtaining a breast tomographic image comprising an image sequence consisting of a plurality of images;
non-continuously selecting a plurality of images to be processed from the image sequence, respectively identifying regions of interest in each image to be processed, and obtaining a display priority of each region of interest;
projecting the regions of interest in the plurality of images to be processed to the same plane to filter the regions of interest, wherein the method of filtering comprises non-maximum suppression; and
selecting a plurality of regions of interest with the highest display priority from the filtered regions of interest for output,
wherein the respectively identifying the region of interest in each of the images to be processed comprises:
identifying a first feature region in the image to be processed, wherein the first feature region is a preliminary prediction region of a region of interest;
identifying a second feature region in the image to be processed through an image gradient algorithm, wherein the second feature region is a body contour region;
eroding the first feature region based on the second feature region to obtain a region of interest in the image to be processed,
wherein the eroding of the first feature region based on the second feature region comprises performing an operation pixel by pixel using a second feature map showing the second feature region and a first feature map showing the first feature region to remove the second feature region from the first feature region.
2. The method of claim 1, wherein said separately identifying a region of interest in each of said to-be-processed images comprises:
dividing an image to be processed into a plurality of blocks;
respectively identifying the region of interest in each block to obtain a recognition result for each block;
and stitching the recognition results to obtain the region of interest in the image to be processed.
3. The method of claim 2, further comprising:
identifying a second feature region in the image to be processed through an image gradient algorithm, wherein the second feature region is a body contour region,
wherein the stitching of the recognition results to obtain the region of interest in the image to be processed comprises:
stitching the recognition results, and obtaining the region of interest in the image to be processed by eroding the stitched recognition results based on the second feature region.
4. The method according to claim 1 or 3, wherein the identifying a second feature region in the to-be-processed image by an image gradient algorithm comprises:
determining a background gray value;
assigning values to pixels different from the background gray value by using a mask gray value to form a mask;
performing dilation processing on the mask to eliminate an internal void;
calculating image gradients to determine a preliminary result of the body contour;
and performing dilation processing on the preliminary result to obtain the second feature region.
5. The method according to any one of claims 1-3, wherein said selecting a plurality of images to be processed from said image sequence non-consecutively comprises any one of:
selecting a plurality of images to be processed from the image sequence at a predetermined interval; or
randomly selecting a plurality of images to be processed from the image sequence.
6. The method of any of claims 1-3, wherein the region of interest includes a microcalcification prediction region.
7. An image processing apparatus comprising:
an obtaining module configured to obtain a breast tomographic image comprising an image sequence composed of a plurality of images;
an identification module configured to non-continuously select a plurality of images to be processed from the image sequence, respectively identify regions of interest in each image to be processed, and obtain a display priority of each region of interest;
a filtering module configured to project the regions of interest in the plurality of images to be processed to the same plane to filter the regions of interest, wherein the method of filtering comprises non-maximum suppression; and
an output module configured to select a plurality of regions of interest with the highest display priority from the filtered regions of interest for output,
wherein the identification module comprises:
the first identification submodule is configured to identify a first characteristic region in the image to be processed, wherein the first characteristic region is a preliminary prediction region of a region of interest;
the second identification submodule is configured to identify a second characteristic region in the image to be processed through an image gradient algorithm, and the second characteristic region is a body contour region;
an erosion submodule configured to erode the first feature region based on a second feature region to obtain a region of interest in the image to be processed,
wherein the eroding the first feature region based on the second feature region comprises performing an operation pixel by pixel using a second feature map showing the second feature region and a first feature map showing the first feature region to cull the second feature region from the first feature region.
8. The apparatus of claim 7, wherein the identification module comprises:
a division submodule configured to divide an image to be processed into a plurality of blocks;
the third identification submodule is configured to identify the region of interest in each block respectively to obtain an identification result of each block;
and the splicing submodule is configured to splice the identification results to obtain an interested area in the image to be processed.
9. The apparatus of claim 8, further comprising:
a second identification submodule configured to identify a second feature region in the to-be-processed image through an image gradient algorithm, wherein the second feature region is a body contour region,
the stitching submodule is further configured to stitch the recognition results and obtain the region of interest in the image to be processed by eroding the stitched recognition results based on the second feature region.
10. The apparatus of claim 7 or 9, wherein the second identification submodule comprises:
a background determination unit configured to determine a background grayscale value;
a mask unit configured to assign a mask gray value to a pixel different from the background gray value to form a mask;
a first expansion unit configured to perform an expansion process on the mask to eliminate an internal void;
a gradient calculation unit configured to calculate image gradients to determine a preliminary result of the body contour;
a second expansion unit configured to perform expansion processing on the preliminary result to obtain the second characteristic region.
11. The apparatus according to any one of claims 7-9, wherein said non-consecutively selecting a plurality of images to be processed from said sequence of images comprises any one of:
selecting a plurality of images to be processed from the image sequence at a predetermined interval; or
randomly selecting a plurality of images to be processed from the image sequence.
12. The apparatus of any one of claims 7-9, wherein the region of interest comprises a microcalcification prediction region.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
obtaining a breast tomographic image comprising an image sequence consisting of a plurality of images;
non-continuously selecting a plurality of images to be processed from the image sequence, respectively identifying regions of interest in each image to be processed, and obtaining a display priority of each region of interest;
projecting the regions of interest in the plurality of images to be processed to the same plane to filter the regions of interest, wherein the method of filtering comprises non-maximum suppression; and
selecting a plurality of regions of interest with the highest display priority from the filtered regions of interest for output,
wherein the respectively identifying the region of interest in each of the images to be processed comprises:
identifying a first feature region in the image to be processed, wherein the first feature region is a preliminary prediction region of a region of interest;
identifying a second feature region in the image to be processed through an image gradient algorithm, wherein the second feature region is a body contour region;
eroding the first feature region based on the second feature region to obtain a region of interest in the image to be processed,
wherein the eroding of the first feature region based on the second feature region comprises performing an operation pixel by pixel using a second feature map showing the second feature region and a first feature map showing the first feature region to remove the second feature region from the first feature region.
14. A computer-readable storage medium having computer-readable instructions stored thereon, which when executed by a processor, cause the processor to perform the method of any one of claims 1-6.
CN202210635894.6A 2022-06-06 2022-06-06 Image processing method, image processing apparatus, electronic device, and medium Active CN114820591B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210635894.6A CN114820591B (en) 2022-06-06 2022-06-06 Image processing method, image processing apparatus, electronic device, and medium

Publications (2)

Publication Number Publication Date
CN114820591A CN114820591A (en) 2022-07-29
CN114820591B true CN114820591B (en) 2023-02-21

Family

ID=82520408


Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106951895A (en) * 2016-01-07 2017-07-14 富士通株式会社 Determine the method and system of the profile of area-of-interest in image
CN109740600A (en) * 2019-01-04 2019-05-10 上海联影医疗科技有限公司 Localization method, device, computer equipment and the storage medium of highlighted focal area
CN109801271A (en) * 2019-01-04 2019-05-24 上海联影医疗科技有限公司 Localization method, device, computer equipment and the storage medium of calcification clusters
CN110996772A (en) * 2017-08-15 2020-04-10 国际商业机器公司 Breast cancer detection
CN111566705A (en) * 2017-12-29 2020-08-21 上海联影医疗科技有限公司 System and method for determining region of interest in medical imaging
CN111583210A (en) * 2020-04-29 2020-08-25 北京小白世纪网络科技有限公司 Automatic breast cancer image identification method based on convolutional neural network model integration
CN111783878A (en) * 2020-06-29 2020-10-16 北京百度网讯科技有限公司 Target detection method and device, electronic equipment and readable storage medium
CN112419292A (en) * 2020-11-30 2021-02-26 深圳云天励飞技术股份有限公司 Pathological image processing method and device, electronic equipment and storage medium
JP2021029387A (en) * 2019-08-20 2021-03-01 コニカミノルタ株式会社 Medical information processing device and program
CN112767346A (en) * 2021-01-18 2021-05-07 北京医准智能科技有限公司 Multi-image-based full-convolution single-stage mammary image lesion detection method and device
CN113160199A (en) * 2021-04-29 2021-07-23 武汉联影医疗科技有限公司 Image recognition method and device, computer equipment and storage medium
CN113343895A (en) * 2021-06-24 2021-09-03 北京欧珀通信有限公司 Target detection method, target detection device, storage medium, and electronic apparatus
CN113674254A (en) * 2021-08-25 2021-11-19 上海联影医疗科技股份有限公司 Medical image abnormal point identification method, equipment, electronic device and storage medium
CN113689412A (en) * 2021-08-27 2021-11-23 中国人民解放军总医院第六医学中心 Thyroid image processing method and device, electronic equipment and storage medium
CN114119546A (en) * 2021-11-25 2022-03-01 推想医疗科技股份有限公司 Method and device for detecting MRI image

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103337074B (en) * 2013-06-18 2016-01-13 大连理工大学 A kind of method based on active contour model segmentation mammary gland DCE-MRI focus
KR102383134B1 (en) * 2017-11-03 2022-04-06 삼성전자주식회사 Electronic device for processing image based on priority and method for operating thefeof
CN111260642A (en) * 2020-02-12 2020-06-09 上海联影医疗科技有限公司 Image positioning method, device, equipment and storage medium
CN114359545A (en) * 2021-12-27 2022-04-15 北京大学第一医院 Image area identification method and device and electronic equipment


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Local extraction and detection of early stage breast cancers through a microneedle and nano-Ag/MBL film based painless and blood-free strategy; Liming Chen et al.; Materials Science and Engineering; 2019-12-06; vol. 109; pp. 1-8 *
Prior information guided auto-contouring of breast gland for deformable image registration in postoperative breast cancer radiotherapy; Xin Xie et al.; Quantitative Imaging in Medicine and Surgery; 2021-12-31; pp. 4721-4730 *
Research on Breast Mass Detection Based on Extreme Learning Machine; Wang Zhiqiong; China Doctoral Dissertations Full-text Database, Medicine & Health Sciences; 2016-03-15; E072-170 *
Research on Pulmonary Nodule Detection Based on Deep Learning; Fang Junwei; China Masters' Theses Full-text Database, Information Science & Technology; 2019-01-15; I138-4156 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Room 3011, 2nd Floor, Building A, No. 1092 Jiangnan Road, Nanmingshan Street, Liandu District, Lishui City, Zhejiang Province, 323000

Patentee after: Zhejiang Yizhun Intelligent Technology Co.,Ltd.

Address before: 100000 floor 12, building a, Zhizhen building, No. 7 Zhichun Road, Haidian District, Beijing

Patentee before: Beijing Yizhun Intelligent Technology Co.,Ltd.