WO2019037676A1 - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
WO2019037676A1
WO2019037676A1 (PCT/CN2018/101242)
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixel
pixel value
matrix
mask map
Prior art date
Application number
PCT/CN2018/101242
Other languages
English (en)
French (fr)
Inventor
王闾威
李正龙
李莹莹
Original Assignee
京东方科技集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司 filed Critical 京东方科技集团股份有限公司
Priority to US16/619,771 priority Critical patent/US11170482B2/en
Publication of WO2019037676A1 publication Critical patent/WO2019037676A1/zh

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 5/00 Image enhancement or restoration
            • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
            • G06T 5/90 Dynamic range modification of images or parts thereof
              • G06T 5/92 Dynamic range modification of images or parts thereof based on global image properties
          • G06T 7/00 Image analysis
            • G06T 7/0002 Inspection of images, e.g. flaw detection
              • G06T 7/0012 Biomedical image inspection
            • G06T 7/10 Segmentation; Edge detection
              • G06T 7/13 Edge detection
          • G06T 2207/00 Indexing scheme for image analysis or image enhancement
            • G06T 2207/10 Image acquisition modality
              • G06T 2207/10056 Microscopic image
            • G06T 2207/20 Special algorithmic details
              • G06T 2207/20081 Training; Learning
              • G06T 2207/20084 Artificial neural networks [ANN]
              • G06T 2207/20212 Image combination
                • G06T 2207/20221 Image fusion; Image merging
            • G06T 2207/30 Subject of image; Context of image processing
              • G06T 2207/30004 Biomedical image processing
                • G06T 2207/30088 Skin; Dermal

Definitions

  • The present disclosure relates to an image processing method and apparatus.
  • For skin-disease patients, the suspected diseased area of the patient and its surrounding area are photographed, experienced doctors then manually locate the suspected diseased area in the captured image, and the suspected diseased area is finally diagnosed on the basis of the manual positioning result.
  • According to some embodiments, an image processing method comprises: acquiring a first image, the first image including an image formed of the skin part of a patient to be diagnosed that needs to be diagnosed; inputting the first image into a neural network to acquire position information of a lesion area in the first image; acquiring a boundary of the lesion area in the first image, and acquiring, from the first image, an original map and a mask map including the lesion area; and fusing the mask map and the original map to obtain a target image corresponding to the lesion area; wherein the target image is an image used for diagnosing the lesion area, and the pixel points in the target image correspond one-to-one with the pixel points in the original map and the mask map.
  • In some embodiments, fusing the mask map and the original map to obtain the target image corresponding to the lesion area comprises: acquiring a first pixel value of each pixel point of the mask map and a second pixel value of each pixel point in the original map; fusing the first pixel value of each pixel point in the mask map with the second pixel value of the corresponding pixel point in the original map to obtain a target pixel value of each pixel point; and forming the target image of the lesion area according to the target pixel values of all pixel points.
  • In some embodiments, fusing the first pixel value of each pixel point in the mask map with the second pixel value of the corresponding pixel point in the original map to obtain the target pixel value of each pixel point comprises: taking the ratio of the first pixel value of each pixel point in the mask map to a maximum pixel value, and multiplying the ratio by the second pixel value of the corresponding pixel point in the original map to obtain first data; acquiring the difference between the maximum pixel value and the first pixel value; and adding the first data to the difference to obtain the target pixel value of each pixel point.
  • In some embodiments, acquiring the first pixel value of each pixel point of the mask map and the second pixel value of each pixel point in the original map comprises: extracting the first pixel value of each pixel point in the mask map and using the first pixel values to form a first matrix of the mask map, wherein the position of the first pixel value of each pixel point in the first matrix is determined by the position of that pixel point in the mask map; and extracting the second pixel value of each pixel point in the original map and using the second pixel values to form a second matrix of the original map, wherein the position of the second pixel value of each pixel point in the second matrix is determined by the position of that pixel point in the original map.
  • Fusing the first pixel value of each pixel point in the mask map with the second pixel value of the corresponding pixel point in the original map to obtain the target pixel value of each pixel point then comprises: dividing the first matrix by a maximum pixel value to obtain a third matrix; multiplying the pixel value of each pixel point in the third matrix by the second pixel value of the corresponding pixel point in the second matrix to obtain a fourth matrix; subtracting the first pixel value of each pixel point in the first matrix from the maximum pixel value to obtain a fifth matrix; and adding the fourth matrix and the fifth matrix to obtain a sixth matrix, wherein the pixel value of each pixel point in the sixth matrix is the target pixel value of that pixel point.
  • In some embodiments, the method further includes diagnosing the lesion area in the target image to obtain a diagnosis result.
  • In some embodiments, before acquiring the first image, the method further comprises: acquiring a sample image; acquiring annotation data of the sample image, the annotation data including position information of the lesion area in the sample image; and training a neural network with the sample image and the annotation data to form a neural network having the desired function.
  • In some embodiments, before training the neural network with the sample image and the annotation data to form a neural network having the desired function, the method further comprises: randomly extracting a selected proportion of sample images for hair supplementation processing; and/or randomly extracting a selected proportion of sample images for color enhancement processing.
  • In some embodiments, the position information includes center coordinates and a radius value of the lesion area.
  • In some embodiments, the position information includes center coordinates, a major-axis radius value, and a minor-axis radius value of the lesion area.
  • In some embodiments, the position information further includes center coordinates and a radius value of the sample image.
  • In some embodiments, before the first image is input into the neural network to acquire the position information of the lesion area in the first image, the method further comprises pre-processing the first image.
  • In some embodiments, pre-processing the first image includes: acquiring the size of the first image; comparing the size of the first image with an image resolution parameter of the input layer of the neural network to determine whether the size of the first image is greater than the image resolution parameter of the input layer; in response to determining that the size of the first image is greater than the image resolution parameter of the input layer, cropping or reducing the first image; and in response to determining that the size of the first image is smaller than the image resolution parameter of the input layer, enlarging the first image.
  • In some embodiments, the step of acquiring the boundary of the lesion area in the first image to acquire, from the first image, the original map and the mask map including the lesion area is based on the position information and an image edge detection algorithm.
  • In some embodiments, acquiring the first image includes scanning the skin part of the patient to be diagnosed that needs to be diagnosed to form the first image.
  • In some embodiments, fusing the mask map and the original map includes performing a bitwise AND operation on the mask map and the original map.
  • According to other embodiments, an image processing apparatus comprises: a first acquisition module configured to acquire a first image to be diagnosed, the first image including an image formed of the skin part of a patient to be diagnosed that needs to be diagnosed; a machine learning module configured to input the first image into a neural network to acquire position information of a lesion area in the first image; an extraction module configured to acquire a boundary of the lesion area in the first image and acquire, from the first image, an original map and a mask map including the lesion area; and a fusion module configured to fuse the mask map and the original map to obtain a target image corresponding to the lesion area;
  • wherein the target image is an image used for diagnosing the lesion area, and the pixel points in the target image correspond one-to-one with the pixel points in the original map and the mask map.
  • According to other embodiments, a computer apparatus comprises: a processor; a memory; and computer program instructions stored in the memory which, when executed by the processor, cause the apparatus to perform one or more steps of an image processing method provided by at least one embodiment of the present disclosure.
  • According to other embodiments, a non-transitory computer-readable storage medium has a computer program stored thereon; when the computer program instructions are executed by a processor, the processor is caused to perform one or more steps of an image processing method provided by at least one embodiment of the present disclosure.
  • FIG. 1 is a schematic flowchart diagram of an image processing method according to some embodiments of the present disclosure
  • FIG. 2 is a mask map including a lesion area according to some embodiments of the present disclosure.
  • FIG. 3 is an image of a lesion area according to some embodiments of the present disclosure.
  • FIG. 4 is a schematic flowchart diagram of another image processing method according to some embodiments of the present disclosure.
  • FIG. 5 is a schematic flowchart diagram of still another image processing method according to some embodiments of the present disclosure.
  • FIG. 6 is a schematic diagram of labeling a sample image according to some embodiments of the present disclosure.
  • FIG. 7 is a schematic structural diagram of an image processing apparatus according to some embodiments of the present disclosure.
  • FIG. 8 is a schematic structural diagram of another image processing apparatus according to some embodiments of the present disclosure.
  • FIG. 9 is a schematic structural diagram of a computer device according to some embodiments of the present disclosure.
  • In embodiments of the present disclosure, neural networks have shown good performance in many image-processing applications such as target recognition, target detection, and target classification. A convolutional neural network (CNN), for example one containing multiple convolutional layers, can detect features of different regions and dimensions in an image through its different convolutional layers, so deep-learning methods developed on the basis of CNNs can classify and recognize images.
  • A traditional convolutional neural network usually consists of an input layer, convolutional layers, pooling layers, and fully connected layers, i.e., INPUT (input layer) - CONV (convolutional layer) - POOL (pooling layer) - FC (fully connected layer). The convolutional layers perform feature extraction; the pooling layers reduce the dimensionality of the input feature maps; the fully connected layers connect all the features and produce the output.
  • Besides the traditional convolutional neural network listed above, the convolutional neural network may be a fully convolutional network (FCN), the segmentation network SegNet, dilated convolutions, the atrous-convolution-based deep neural network DeepLab (V1 & V2), the multi-scale-convolution-based deep neural network DeepLab (V3), the multi-path segmentation neural network RefineNet, and so on.
  • FIG. 1 is a schematic flowchart of an image processing method according to some embodiments of the present disclosure.
  • As shown in FIG. 1, the image processing method includes the following steps:
  • Images of skin parts may be formed by a variety of skin imaging devices, which are not limited in this disclosure; devices such as a dermatoscope, skin ultrasound, or a laser confocal scanning microscope are all suitable for use in the present disclosure.
  • For example, the skin part of the patient to be diagnosed may be placed in the imaging area of a dermatoscope and photographed by the dermatoscope to obtain a first image for diagnosis.
  • The dermatoscope used may be a polarized or a non-polarized dermatoscope.
  • For example, the camera of a mobile terminal such as a mobile phone or tablet, or a camera, is used to photograph the skin part to be diagnosed, obtaining a first image for diagnosis.
  • S102: input the first image into the neural network and obtain the position information of the lesion area in the first image.
  • In some embodiments, a step of pre-processing the first image is also included.
  • For example, the size of the first image, such as its length and width, is acquired, and the size of the first image is compared with the image resolution parameter of the input layer of the neural network used.
  • When the size of the first image is greater than the image resolution parameter of the input layer, the first image is cropped or reduced; when the size of the first image is smaller than the image resolution parameter of the input layer, the first image is enlarged.
  • For example, if the image resolution parameter of the input layer (INPUT) of the neural network used is 32*32 and the resolution of the first image is 600*600, the first image can be scaled down to a resolution of 32*32.
  • For example, if the image resolution parameter of the input layer is 32*32 and the resolution of the first image is 10*10, the first image can be stretched to a resolution of 32*32.
  • The neural network processes the first image and outputs the position information of the lesion area in the first image.
  • In some embodiments, the position information may include the center coordinates and the radius value of the lesion area, among others. From the center coordinates and the radius value, a circular area including the lesion area can thus be obtained.
  • In some embodiments, the position information may include the center coordinates, the major-axis radius value, and the minor-axis radius value of the lesion area, among others. From these, an elliptical area including the lesion area can be obtained.
  • The position of the lesion area may also be described by other closed geometric figures, for example a rectangular area including the lesion area or an area of any other shape.
  • Obtaining, through the neural network, a circular or elliptical area including the lesion area from the first image helps narrow the range within which the lesion area is located.
  • Since medical experts with rich experience in diagnosing skin lesions are limited, the neural network can improve the accuracy with which medical staff as a whole locate skin lesion areas.
  • The position information is combined with an image edge detection algorithm to obtain the original map and the mask map including the lesion area.
  • For example, a circular area including the lesion area can be obtained from the center coordinates and the radius value in the position information.
  • An image edge detection algorithm, such as the Snake algorithm, the GAC (geodesic active contour) algorithm, or a level-set algorithm, searches for the boundary of the lesion area within the circular area, and the original map and the mask map of the circular area including the lesion area are acquired from the first image.
  • FIG. 2 is a mask map including a lesion area, in which the white portion of FIG. 2 is the mask corresponding to the lesion area.
  • Fusion processing includes, but is not limited to, a bitwise AND operation on the mask map and the original map.
  • The target image includes only the image of the lesion area and does not include the image of non-lesion areas; the target image is the image used for diagnosing the lesion area.
  • FIG. 3 is the target image of the lesion area obtained according to the mask map shown in FIG. 2.
  • An embodiment is used below to describe how the target image of the lesion area is obtained from the pixel values of the pixel points in the mask map and the pixel values of the pixel points in the original map.
  • The image processing method includes the following steps:
  • S402: input the first image into the neural network to acquire the position information of the lesion area in the first image.
  • Steps S401-S403 are similar to steps S101-S103 in the foregoing embodiment and are not repeated here.
  • The first pixel value of each pixel point in the mask map and the second pixel value of each pixel point in the original map are acquired respectively.
  • The mask map and the original map have the same number of pixel points, and the pixel point positions correspond.
  • The first pixel value of a pixel point in the mask map is fused with the second pixel value of the corresponding pixel point in the original map to obtain the target pixel value of the pixel point.
  • For each pixel point, the ratio of the first pixel value to the maximum pixel value is taken, and the ratio is multiplied by the second pixel value of the corresponding pixel point in the original map to obtain first data. Then, the difference between the maximum pixel value and the first pixel value is computed. Finally, the value obtained by adding the first data to the difference is the target pixel value of the pixel point, as shown in formula (1): d_pixel = (a_mask / max) × b_org + (max - a_mask), where d_pixel is the target pixel value of the pixel point, a_mask is the first pixel value of the pixel point in the mask map, max is the maximum pixel value 255, and b_org is the second pixel value of the corresponding pixel point in the original map.
  • After the target pixel value of each pixel point is obtained according to formula (1), the target pixel values of all pixel points are placed at the positions corresponding to the pixel points in the original map or the mask map, yielding the target image that contains only the lesion area.
  • The pixel value of each pixel point in the mask map is fused with the pixel value of the corresponding pixel point in the original map to obtain the target pixel value of that pixel point, so that the image of the lesion area can be extracted.
  • The training method of the neural network may include the following steps:
  • Images of the skin parts of previously diagnosed patients may be obtained from hospitals as sample images.
  • The position data of the lesion area in the sample image can be marked by manual annotation.
  • The center coordinates and radius value of the lesion area in the sample image, as well as the center coordinates and radius value of the sample image, can be annotated, thereby obtaining the annotation data of the sample image.
  • The annotation data includes the center coordinates and radius value of the lesion area in the sample image, and the center coordinates and radius value of the sample image.
  • For one manually annotated sample image, the center coordinates of the sample image are (x1, y1) with radius value r1, and the center coordinates of the lesion area in the sample image are (x2, y2) with radius value r2.
  • Multiple sample data sets for training the neural network can thus be obtained; each sample in a data set includes a sample image and the annotation corresponding to that sample image.
  • The training process of neural networks is generally known in the art.
  • The parameters of the neural network are continuously optimized to form a neural network having the function of acquiring the position information of the lesion area in the first image.
  • In the resulting network, the input of the input layer is an image, and the output of the output layer is the position information of the lesion area.
  • The neural network formed by training executes as described in the above embodiments; a specific embodiment is shown below.
  • S505: input the first image into the neural network to acquire the position information of the lesion area in the first image.
  • Steps S504-S506 are similar to steps S101-S103 in the foregoing embodiment and are therefore not repeated here.
  • The first pixel value of each pixel point in the mask map is acquired, and the first pixel values of the pixel points are used to form the first matrix of the mask map.
  • The position of the first pixel value of each pixel point in the first matrix is determined by the position of that pixel point in the mask map.
  • The first pixel values of the first row of pixel points in the mask map may be used as the first row of elements of the first matrix, and the first pixel values of the second row of pixel points in the mask map as the second row of elements of the first matrix; that is, the numbers of rows and columns of pixel points in the mask map are the same as the numbers of rows and columns of the first matrix.
  • Alternatively, the first pixel values of the first row of pixel points in the mask map may be used as the first column of elements of the first matrix, and the first pixel values of the second row of pixel points in the mask map as the second column of elements of the first matrix; the numbers of rows and columns of pixel points in the mask map then correspond to the numbers of columns and rows of the first matrix, respectively.
  • Similarly, the second pixel value of each pixel point in the original map is extracted, and the second pixel values of the pixel points are used to form the second matrix of the original map.
  • The position of the second pixel value of each pixel point in the second matrix is determined by the position of that pixel point in the original map.
  • The first matrix may be divided by the maximum pixel value, that is, each first pixel value in the first matrix is divided by the maximum pixel value, to obtain the third matrix. Then, the pixel value of each pixel point in the third matrix is multiplied by the second pixel value of the corresponding pixel point in the second matrix to obtain the fourth matrix. The first pixel value of each pixel point in the first matrix is then subtracted from the maximum pixel value to obtain the fifth matrix. Finally, the fourth matrix is added to the fifth matrix to obtain the sixth matrix. The pixel value of each pixel point in the sixth matrix is the target pixel value of that pixel point, as shown in formulas (2) and (3): C = A_mask / max (2) and D_pixel = D + (MAX - A_mask) (3).
  • Here C is the third matrix, A_mask is the first matrix of the mask map, and max is the maximum pixel value 255.
  • D is the fourth matrix, obtained by multiplying, element-wise, the pixel value of each pixel point in the third matrix C by the second pixel value of the corresponding pixel point in the second matrix B_org; the value of every element of the matrix MAX is the maximum pixel value, and its numbers of rows and columns are the same as those of the first matrix A_mask; MAX - A_mask gives the fifth matrix; and D_pixel denotes the sixth matrix.
  • The numbers of rows and columns of the fourth matrix in the above embodiment are the same as the numbers of rows and columns of the first matrix.
  • Computing the target pixel values of the pixel points through matrices formed from the pixel values of the mask map and the original map improves the computation speed.
  • The target pixel value of each pixel point in the sixth matrix is placed at the position corresponding to that pixel point in the original map or the mask map to form the target image of the lesion area.
  • The diagnosis of S510 may be performed by a doctor or by computer-aided diagnosis (CAD) software.
  • Images of lesion areas that have already been diagnosed may be collected as sample images, together with the diagnosis results for those images; the sample images and the corresponding diagnosis results are then used to build a neural network for diagnosis, such that the diagnostic neural network converges or its error stabilizes within the allowable error range. Once a trained neural network is obtained, it can be used to diagnose patients.
  • After the target image of the lesion area is obtained, the target image is input into a pre-trained diagnosis model, which diagnoses the lesion area in the target image and outputs the diagnosis result.
  • Inputting the image of the lesion area into the diagnosis model as the image used for diagnosis improves the accuracy of the diagnosis result.
  • In some embodiments, to improve the robustness of the neural network used for diagnosis, a step of pre-processing the sample images during training is also included.
  • A selected proportion of the sample images is randomly extracted for hair supplementation processing.
  • For example, one quarter of the sample images may be extracted for hair supplementation processing.
  • An image processing method can be used to simulate the drawing of hair, which is randomly added with a certain probability to the skin region in the sample image.
  • A selected proportion of the sample images is randomly extracted for color enhancement processing.
  • Color enhancement covers color saturation, brightness, and contrast.
  • An image processing method provided by at least one embodiment of the present disclosure obtains the position information of a lesion area in a captured image through a neural network, determines the boundary of the lesion area (in some embodiments, for example, based on the obtained position information and an image edge detection algorithm), obtains the original map and the mask map containing the lesion area, and fuses the original map and the mask map to obtain, from the original map, an image containing only the lesion area.
  • The image processing method forms a first image for diagnosis by scanning the skin part of the patient to be diagnosed that needs to be diagnosed, inputs the first image into the neural network to obtain the position information of the lesion area in the first image, determines the boundary of the lesion area (for example, according to the position information and the image edge detection algorithm), acquires, from the first image, the original map and the mask map including the lesion area, and fuses the mask map and the original map to obtain the target image corresponding to the lesion area, wherein the target image is an image used for diagnosing the lesion area and the pixel points in the target image correspond one-to-one with the pixel points in the original map and the mask map.
  • Some embodiments of the present disclosure also propose an image processing apparatus.
  • The image processing apparatus includes a first acquisition module 710, a machine learning module 720, an extraction module 730, and a fusion module 740.
  • The first acquisition module 710 is configured to acquire a first image to be diagnosed, the first image including an image formed of the skin part of the patient to be diagnosed that needs to be diagnosed.
  • In some embodiments, the first acquisition module is integrated with the skin imaging device, which scans the skin part of the patient to be diagnosed that needs to be diagnosed to form the first image for diagnosis.
  • In some embodiments, the skin imaging device scans the skin part of the patient to be diagnosed that needs to be diagnosed to form the first image for diagnosis, and the first acquisition module is coupled to the skin imaging device to acquire the first image so formed.
  • The machine learning module 720 is configured to input the first image into the neural network to acquire the position information of the lesion area in the first image.
  • The extraction module 730 is configured to acquire the boundary of the lesion area in the first image and acquire, from the first image, the original map and the mask map including the lesion area.
  • The fusion module 740 is configured to fuse the mask map and the original map to obtain the target image corresponding to the lesion area, wherein the target image is the image used for diagnosing the lesion area, and the pixel points in the target image correspond one-to-one with the pixel points in the original map and the mask map.
  • In some embodiments, the fusion module 740 includes an acquisition unit 741, a fusion unit 742, and a formation unit 743.
  • The acquisition unit 741 is configured to acquire the first pixel value of each pixel point of the mask map and the second pixel value of each pixel point in the original map.
  • The fusion unit 742 is configured to fuse the first pixel value of a pixel point in the mask map with the second pixel value of the corresponding pixel point in the original map to obtain the target pixel value of the pixel point.
  • The formation unit 743 is configured to form the target image of the lesion area according to the target pixel values of all pixel points.
  • The fusion unit 742 is further configured to: take, for each pixel point, the ratio of the first pixel value to the maximum pixel value and multiply the ratio by the second pixel value to obtain first data; acquire the difference between the maximum pixel value and the first pixel value; and add the first data to the difference to obtain the target pixel value of the pixel point.
  • The acquisition unit 741 is further configured to: extract the first pixel value of each pixel point in the mask map and use the first pixel values to form the first matrix of the mask map, where the position of the first pixel value of each pixel point in the first matrix is determined by the position of that pixel point in the mask map; and extract the second pixel value of each pixel point in the original map and use the second pixel values to form the second matrix of the original map, where the position of the second pixel value of each pixel point in the second matrix is determined by the position of that pixel point in the original map.
  • The fusion unit 742 is also configured to: divide the first matrix by the maximum pixel value to obtain the third matrix; multiply the pixel value of each pixel point in the third matrix by the second pixel value of the corresponding pixel point in the second matrix to obtain the fourth matrix; subtract the first pixel value of each pixel point in the first matrix from the maximum pixel value to obtain the fifth matrix; and add the fourth matrix and the fifth matrix to obtain the sixth matrix, wherein the pixel value of each pixel point in the sixth matrix is the target pixel value of that pixel point.
  • In some embodiments, the image processing apparatus further includes:
  • a diagnosis module configured to input the target image into the diagnosis model and diagnose the lesion area in the target image to obtain the diagnosis result.
  • In some embodiments, the image processing apparatus further includes:
  • a second acquisition module configured to acquire images of previously diagnosed patients as sample images for training the constructed initial neural network;
  • a third acquisition module configured to acquire the annotation data of the sample images, the annotation data including the position of the lesion area in the sample image; and
  • a training module configured to input the sample images and annotation data into the neural network to train it, forming a neural network having the function of acquiring the position information of the lesion area in the first image.
  • In some embodiments, the image processing apparatus further includes:
  • a pre-processing module for randomly extracting a preset proportion of the sample images for hair supplementation processing; and/or randomly extracting a preset proportion of the sample images for color enhancement processing.
  • An image processing apparatus provided by at least one embodiment of the present disclosure scans the skin part of the patient to be diagnosed that needs to be diagnosed to form a first image for diagnosis, inputs the first image into a neural network for learning to acquire the position information of the lesion area in the first image, acquires, from the first image, the original map and the mask map including the lesion area, and fuses the mask map and the original map to obtain the target image corresponding to the lesion area, wherein the target image is the image used for diagnosing the lesion area and the pixel points in the target image correspond one-to-one with the pixel points in the original map and the mask map.
  • The lesion area in the original image can thus be distinguished from the non-lesion areas and an image including only the lesion area can be obtained, accurately locating and extracting the lesion area from the captured image and reducing labor costs.
  • Some embodiments of the present disclosure also provide a computer device comprising a processor, a memory, and computer program instructions stored in the memory which, when executed by the processor, cause the processor to perform one or more steps of an image processing method provided by at least one embodiment of the present disclosure.
  • Some embodiments of the present disclosure also provide a non-transitory computer-readable storage medium having computer program instructions stored thereon which, when executed by a processor, cause the processor to perform one or more steps of the image processing method provided by at least one embodiment of the present disclosure.
  • FIG. 9 shows a specific implementation structure of a computer device 800 suitable for implementing embodiments of the present disclosure.
  • The computer device shown in FIG. 9 is merely an example and should not impose any limitation on the function and scope of use of the embodiments of the present disclosure.
  • The computer system 800 includes one or more processors 801 that can perform various operations according to program instructions stored in a memory 802 (for example, program instructions stored in a memory 802 such as a read-only memory or a disk and loaded into random-access memory (RAM)).
  • The memory 802 also stores various programs and data required for the operation of the computer system 800.
  • The processor 801 and the memory 802 are connected to each other through a bus 803.
  • An input/output (I/O) interface 804 is also coupled to the bus 803.
  • A variety of components can be coupled to the I/O interface 804 to realize the input and output of information:
  • an input device 805 including a keyboard, a mouse, and the like;
  • an output device 806 including a cathode-ray tube (CRT), a liquid-crystal display (LCD), a speaker, and the like; and
  • a communication device 807 including a network interface card such as a LAN card or a modem.
  • The communication device 807 performs communication processing via a network such as the Internet.
  • A drive 808 is also connected to the I/O interface 804 as needed.
  • A removable medium 809, such as a magnetic disk, an optical disc, or a flash memory, is connected to or mounted on the drive 808 as needed.
  • The processor 801 may be a central processing unit (CPU), a field-programmable gate array (FPGA), a microcontroller unit (MCU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or another logic operation device having data processing capability and/or program execution capability.
  • The bus 803 may be a Front Side Bus (FSB), QuickPath Interconnect (QPI), Direct Media Interface (DMI), Peripheral Component Interconnect (PCI), Peripheral Component Interconnect Express (PCI-E), HyperTransport (HT), or the like.
  • An embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the image processing method of at least one embodiment of the present disclosure.
  • The computer program can be downloaded and installed from a network via the communication device 807, and/or installed from the removable medium 809.
  • When the computer program is executed by the processor 801, the above-described functions defined in the system of the present disclosure are executed.
  • The terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
  • Features defined with "first" or "second" may explicitly or implicitly include at least one such feature.
  • "A plurality" means at least two, for example two or three, unless specifically defined otherwise.
  • Any process or method description in the flowcharts, or otherwise described herein, may be understood as representing a module, segment, or portion of code comprising one or more executable instructions for implementing the steps of a custom logic function or process.
  • The scope of the preferred embodiments of the present disclosure includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order depending on the functions involved, as will be understood by those skilled in the art to which the embodiments of the present disclosure belong.
  • A "computer-readable medium" can be any apparatus that can contain, store, communicate, propagate, or transport a program for use by, or in connection with, an instruction execution system, apparatus, or device.
  • More specific examples of computer-readable media include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random-access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact-disc read-only memory (CD-ROM).
  • The computer-readable medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
  • Portions of the present disclosure can be implemented in hardware, software, firmware, or a combination thereof.
  • Multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system.
  • For example, if implemented in hardware, as in another embodiment, any one of the following techniques well known in the art, or a combination thereof, may be used: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
  • Each functional unit in the various embodiments of the present disclosure may be integrated into one processing module, or each unit may exist physically on its own, or two or more units may be integrated into one module.
  • The above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • The integrated modules, if implemented in the form of software functional modules and sold or used as stand-alone products, may also be stored in a computer-readable storage medium.
  • The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. While the embodiments of the present disclosure have been shown and described above, it should be understood that the foregoing embodiments are exemplary and are not to be construed as limiting the present disclosure; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the embodiments within the scope of the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

An image processing method and device, the image processing method comprising: acquiring a first image, the first image including an image formed of the skin part of a patient to be diagnosed that needs to be diagnosed (S101); inputting the first image into a neural network to acquire position information of a lesion area in the first image (S102); acquiring a boundary of the lesion area in the first image, and acquiring, from the first image, an original map and a mask map including the lesion area (S103); and fusing the mask map and the original map to obtain a target image corresponding to the lesion area (S104); wherein the target image is an image used for diagnosing the lesion area, and the pixel points in the target image correspond one-to-one with the pixel points in the original map and the mask map.

Description

Image processing method and device
CROSS-REFERENCE TO RELATED APPLICATION
This application claims priority to Chinese Patent Application No. 201710730705.2, entitled "Image processing method and device" and filed on August 23, 2017, the disclosure of which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
The present disclosure relates to an image processing method and device.
BACKGROUND
For skin-disease patients, the suspected diseased area of the patient to be diagnosed and its surrounding area are photographed, experienced doctors then manually locate the suspected diseased area in the captured image, and the suspected diseased area is finally diagnosed on the basis of the manual positioning result.
SUMMARY
According to some embodiments of the present disclosure, an image processing method is provided, comprising: acquiring a first image, the first image including an image formed of the skin part of a patient to be diagnosed that needs to be diagnosed; inputting the first image into a neural network to acquire position information of a lesion area in the first image; acquiring a boundary of the lesion area in the first image, and acquiring, from the first image, an original map and a mask map including the lesion area; and fusing the mask map and the original map to obtain a target image corresponding to the lesion area; wherein the target image is an image used for diagnosing the lesion area, and the pixel points in the target image correspond one-to-one with the pixel points in the original map and the mask map.
In some embodiments, fusing the mask map and the original map to obtain the target image corresponding to the lesion area includes: acquiring a first pixel value of each pixel point of the mask map and a second pixel value of each pixel point in the original map; fusing the first pixel value of each pixel point in the mask map with the second pixel value of the corresponding pixel point in the original map to obtain a target pixel value of each pixel point; and forming the target image of the lesion area according to the target pixel values of all pixel points.
In some embodiments, fusing the first pixel value of each pixel point in the mask map with the second pixel value of the corresponding pixel point in the original map to obtain the target pixel value of each pixel point includes: taking the ratio of the first pixel value of each pixel point in the mask map to a maximum pixel value, and multiplying the ratio by the second pixel value of the corresponding pixel point in the original map to obtain first data; acquiring the difference between the maximum pixel value and the first pixel value; and adding the first data to the difference to obtain the target pixel value of each pixel point.
In some embodiments, acquiring the first pixel value of each pixel point of the mask map and the second pixel value of each pixel point in the original map includes: extracting the first pixel value of each pixel point in the mask map and using the first pixel values to form a first matrix of the mask map, wherein the position of the first pixel value of each pixel point in the first matrix is determined by the position of that pixel point in the mask map; and extracting the second pixel value of each pixel point in the original map and using the second pixel values to form a second matrix of the original map, wherein the position of the second pixel value of each pixel point in the second matrix is determined by the position of that pixel point in the original map. Fusing the first pixel value of each pixel point in the mask map with the second pixel value of the corresponding pixel point in the original map to obtain the target pixel value of each pixel point then includes: dividing the first matrix by the maximum pixel value to obtain a third matrix; multiplying the pixel value of each pixel point in the third matrix by the second pixel value of the corresponding pixel point in the second matrix to obtain a fourth matrix; subtracting the first pixel value of each pixel point in the first matrix from the maximum pixel value to obtain a fifth matrix; and adding the fourth matrix and the fifth matrix to obtain a sixth matrix, wherein the pixel value of each pixel point in the sixth matrix is the target pixel value of that pixel point.
In some embodiments, after fusing the mask map and the original map to obtain the target image corresponding to the lesion area, the method further includes: diagnosing the lesion area in the target image to obtain a diagnosis result.
In some embodiments, before acquiring the first image, the method further includes: acquiring a sample image; acquiring annotation data of the sample image, the annotation data including position information of the lesion area in the sample image; and training a neural network with the sample image and the annotation data to form a neural network having the desired function.
In some embodiments, before training the neural network with the sample image and the annotation data to form a neural network having the desired function, the method further includes: randomly extracting a selected proportion of sample images for hair supplementation processing; and/or randomly extracting a selected proportion of sample images for color enhancement processing.
In some embodiments, the position information includes center coordinates and a radius value of the lesion area.
In some embodiments, the position information includes center coordinates, a major-axis radius value, and a minor-axis radius value of the lesion area.
In some embodiments, the position information further includes center coordinates and a radius value of the sample image.
In some embodiments, before the first image is input into the neural network to acquire the position information of the lesion area in the first image, the method further includes pre-processing the first image.
In some embodiments, pre-processing the first image includes: acquiring the size of the first image; comparing the size of the first image with an image resolution parameter of the input layer of the neural network to determine whether the size of the first image is greater than the image resolution parameter of the input layer; in response to determining that the size of the first image is greater than the image resolution parameter of the input layer, cropping or reducing the first image; and in response to determining that the size of the first image is smaller than the image resolution parameter of the input layer, enlarging the first image.
In some embodiments, the step of acquiring the boundary of the lesion area in the first image to acquire, from the first image, the original map and the mask map including the lesion area is based on the position information and an image edge detection algorithm.
In some embodiments, acquiring the first image includes scanning the skin part of the patient to be diagnosed that needs to be diagnosed to form the first image.
In some embodiments, fusing the mask map and the original map includes performing a bitwise AND operation on the mask map and the original map.
According to other embodiments of the present disclosure, an image processing device is provided, comprising: a first acquisition module configured to acquire a first image to be diagnosed, the first image including an image formed of the skin part of a patient to be diagnosed that needs to be diagnosed; a machine learning module configured to input the first image into a neural network to acquire position information of a lesion area in the first image; an extraction module configured to acquire a boundary of the lesion area in the first image and acquire, from the first image, an original map and a mask map including the lesion area; and a fusion module configured to fuse the mask map and the original map to obtain a target image corresponding to the lesion area; wherein the target image is an image used for diagnosing the lesion area, and the pixel points in the target image correspond one-to-one with the pixel points in the original map and the mask map.
According to other embodiments of the present disclosure, a computer device is provided, comprising a processor, a memory, and computer program instructions stored in the memory which, when executed by the processor, cause the processor to perform one or more steps of the image processing method provided by at least one embodiment of the present disclosure.
According to other embodiments of the present disclosure, a non-transitory computer-readable storage medium is provided, on which a computer program is stored; when the computer program instructions are executed by a processor, the processor is caused to perform one or more steps of the image processing method provided by at least one embodiment of the present disclosure.
Additional aspects and advantages of the present disclosure will be given in part in the following description, will become apparent in part from the following description, or will be learned through practice of the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and/or additional aspects and advantages of the present disclosure will become apparent and easy to understand from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic flowchart of an image processing method according to some embodiments of the present disclosure;
FIG. 2 is a mask map including a lesion area according to some embodiments of the present disclosure;
FIG. 3 is an image of a lesion area according to some embodiments of the present disclosure;
FIG. 4 is a schematic flowchart of another image processing method according to some embodiments of the present disclosure;
FIG. 5 is a schematic flowchart of still another image processing method according to some embodiments of the present disclosure;
FIG. 6 is a schematic diagram of the annotation of a sample image according to some embodiments of the present disclosure;
FIG. 7 is a schematic structural diagram of an image processing device according to some embodiments of the present disclosure;
FIG. 8 is a schematic structural diagram of another image processing device according to some embodiments of the present disclosure;
FIG. 9 is a schematic structural diagram of a computer device according to some embodiments of the present disclosure.
DETAILED DESCRIPTION
Embodiments of the present disclosure are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which identical or similar reference numerals throughout denote identical or similar elements or elements having identical or similar functions. The embodiments described below with reference to the drawings are exemplary and are intended to explain the present disclosure; they should not be construed as limiting the present disclosure.
In embodiments of the present disclosure, the neural networks referred to have shown good performance in many image-processing applications such as target recognition, target detection, and target classification. A convolutional neural network (CNN), for example one containing multiple convolutional layers, can detect features of different regions and dimensions in an image through its different convolutional layers, so that deep-learning methods developed on the basis of convolutional neural networks can classify and recognize images.
Convolutional neural networks of many structures have been developed. A traditional convolutional neural network usually consists of an input layer, convolutional layers, pooling layers, and fully connected layers, i.e., INPUT (input layer) - CONV (convolutional layer) - POOL (pooling layer) - FC (fully connected layer). The convolutional layers perform feature extraction; the pooling layers reduce the dimensionality of the input feature maps; the fully connected layers connect all the features and produce the output.
As described above, the applicant has used the convolutional neural network to describe the basic concept of applying neural networks in the field of image processing; this is merely illustrative. In the field of machine learning, neural networks of many structures can be used for applications such as image processing. Even for convolutional neural networks, besides the traditional convolutional neural network listed above, the network may be a fully convolutional network (FCN), the segmentation network SegNet, dilated convolutions, the atrous-convolution-based deep neural network DeepLab (V1 & V2), the multi-scale-convolution-based deep neural network DeepLab (V3), the multi-path segmentation neural network RefineNet, and so on.
The image processing method and device of some embodiments of the present disclosure are described below with reference to the accompanying drawings.
FIG. 1 is a schematic flowchart of an image processing method according to some embodiments of the present disclosure.
As shown in FIG. 1, the image processing method includes the following steps:
S101: acquire a first image, the first image including an image formed of the skin part of the patient to be diagnosed that needs to be diagnosed.
In embodiments of the present disclosure, the image of the skin part may be formed by a variety of skin imaging devices, which are not limited in this disclosure; devices such as a dermatoscope, skin ultrasound, or a laser confocal scanning microscope are all applicable to the present disclosure.
For example, the skin part of the patient to be diagnosed may be placed in the imaging area of a dermatoscope and photographed by the dermatoscope to obtain the first image for diagnosis.
The dermatoscope used may be a polarized or a non-polarized dermatoscope.
For example, the camera of a mobile terminal such as a mobile phone or tablet, or a camera, may be used to photograph the skin part that needs to be diagnosed, obtaining the first image for diagnosis.
S102: input the first image into the neural network to acquire the position information of the lesion area in the first image.
In some embodiments, a step of pre-processing the first image is also included. For example, the size of the first image, such as its length and width, is acquired, and the size of the first image is compared with the image resolution parameter of the input layer of the neural network used. When the size of the first image is greater than the image resolution parameter of the input layer, the first image is cropped or reduced; when the size of the first image is smaller than the image resolution parameter of the input layer, the first image is enlarged.
For example, if the image resolution parameter of the input layer (INPUT) of the neural network used is 32*32 and the resolution of the first image is 600*600, the first image can be scaled down to a resolution of 32*32.
For example, if the image resolution parameter of the input layer (INPUT) of the neural network used is 32*32 and the resolution of the first image is 10*10, the first image can be stretched to a resolution of 32*32.
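The sketch below illustrates this resizing step; it is not part of the original disclosure, and it assumes OpenCV (which the disclosure does not name) and an input-layer parameter given as a (width, height) pair.

```python
import cv2  # assumption: OpenCV; the disclosure does not name a library


def preprocess(first_image, input_size=(32, 32)):
    """Scale the first image to the resolution expected by the network
    input layer (the S102 pre-processing): larger images are reduced,
    smaller images are stretched."""
    h, w = first_image.shape[:2]
    tw, th = input_size
    if (w, h) == (tw, th):
        return first_image
    # INTER_AREA suits shrinking; INTER_LINEAR suits enlarging.
    interp = cv2.INTER_AREA if (w > tw or h > th) else cv2.INTER_LINEAR
    return cv2.resize(first_image, (tw, th), interpolation=interp)
```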
The neural network processes the first image and outputs the position information of the lesion area in the first image.
In some embodiments, the position information may include the center coordinates and the radius value of the lesion area, among others. From the center coordinates and the radius value, a circular area including the lesion area can thus be obtained.
In some embodiments, the position information may include the center coordinates, the major-axis radius value, and the minor-axis radius value of the lesion area, among others. From the center coordinates, the major-axis radius value, and the minor-axis radius value, an elliptical area including the lesion area can thus be obtained.
Those skilled in the art will understand that the position of the lesion area may also be described by other closed geometric figures, for example a rectangular area including the lesion area or an area of any other shape.
Obtaining, through the neural network, a circular or elliptical area including the lesion area from the first image helps narrow the range within which the lesion area is located. In addition, since medical experts with rich experience in diagnosing skin lesions are limited, the neural network can improve the accuracy with which medical staff as a whole locate skin lesion areas.
S103: acquire the boundary of the lesion area in the first image, and acquire, from the first image, the original map and the mask map including the lesion area.
In some embodiments, an image edge detection algorithm is combined with the position information to obtain the original map and the mask map including the lesion area.
For example, from the center coordinates and the radius value in the position information, the circular area including the lesion area can be obtained.
For example, an image edge detection algorithm such as the Snake algorithm, the GAC (geodesic active contour) algorithm, or a level-set algorithm searches for the boundary of the lesion area within the circular area, and the original map and the mask map of the circular area including the lesion area are acquired from the first image.
For example, FIG. 2 is a mask map including a lesion area, in which the white portion of FIG. 2 is the mask corresponding to the lesion area.
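For illustration only, the following sketch shows one way such a step could look in code. It is not the patented method: in place of the Snake/GAC/level-set algorithms named above it uses a simple Otsu threshold plus contour extraction, and it assumes OpenCV/NumPy and a lesion that is darker than the surrounding skin.

```python
import cv2
import numpy as np


def extract_original_and_mask(first_image, cx, cy, r):
    """Crop the circular area predicted by the network and estimate the
    lesion boundary inside it, returning the 'original map' and a 0/255
    'mask map' of the same size."""
    x0, y0 = max(cx - r, 0), max(cy - r, 0)
    original = first_image[y0:cy + r, x0:cx + r].copy()   # original map

    gray = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)
    # Inverse Otsu threshold: keeps dark (lesion-like) pixels.
    _, th = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Restrict the boundary search to the predicted circular area.
    circle = np.zeros_like(gray)
    cv2.circle(circle, (cx - x0, cy - y0), r, 255, thickness=-1)
    th = cv2.bitwise_and(th, circle)

    contours, _ = cv2.findContours(th, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros_like(gray)                             # mask map
    if contours:
        lesion = max(contours, key=cv2.contourArea)
        cv2.drawContours(mask, [lesion], -1, 255, thickness=-1)
    return original, mask
```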
S104: fuse the mask map and the original map to obtain the target image corresponding to the lesion area.
After the original map and the mask map including the lesion area are acquired, the mask map and the original map are fused to obtain a target image containing only the lesion area. The fusion processing includes, but is not limited to, a bitwise AND operation on the mask map and the original map.
The target image includes only the image of the lesion area and does not include the image of non-lesion areas; the target image is the image used for diagnosing the lesion area, and the pixel points in the target image correspond one-to-one with the pixel points in the original map and the mask map. FIG. 3 is the target image of the lesion area obtained according to the mask map shown in FIG. 2.
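As a brief illustration (assuming OpenCV, which the disclosure does not name) of the bitwise-AND variant of this fusion: with a 0/255 mask, cv2.bitwise_and keeps the original pixels inside the lesion mask and zeroes everything else, whereas the weighted fusion of formula (1) below whitens the background instead.

```python
import cv2

def fuse_bitwise(original, mask):
    # Pixels where mask == 0 become 0; pixels where mask == 255 are kept.
    return cv2.bitwise_and(original, original, mask=mask)
```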
An embodiment is used below to describe how the target image of the lesion area is obtained from the pixel values of the pixel points in the mask map and the pixel values of the pixel points in the original map.
As shown in FIG. 4, the image processing method includes the following steps:
S401: acquire a first image, the first image including an image formed of the skin part of the patient to be diagnosed that needs to be diagnosed.
S402: input the first image into the neural network to acquire the position information of the lesion area in the first image.
S403: acquire the boundary of the lesion area in the first image, and acquire, from the first image, the original map and the mask map including the lesion area.
Steps S401-S403 are similar to steps S101-S103 in the foregoing embodiment and are not repeated here.
S404: acquire the first pixel value of each pixel point of the mask map and the second pixel value of each pixel point in the original map.
After the mask map and the original map are acquired, the first pixel value of each pixel point in the mask map and the second pixel value of each pixel point in the original map are acquired respectively.
In some embodiments, the mask map and the original map have the same number of pixel points, and the pixel point positions correspond.
S405: fuse the first pixel value of a pixel point in the mask map with the second pixel value of the corresponding pixel point in the original map to obtain the target pixel value of the pixel point.
In some embodiments, for each pixel point, the ratio of the first pixel value to the maximum pixel value is taken, and the ratio is multiplied by the second pixel value of the corresponding pixel point in the original map to obtain first data. Then, the difference between the maximum pixel value and the first pixel value is computed. Finally, the value obtained by adding the first data to the difference is the target pixel value of the pixel point, as shown in formula (1):

d_pixel = (a_mask / max) × b_org + (max - a_mask)      (1)

where d_pixel is the target pixel value of the pixel point, a_mask is the first pixel value of the pixel point in the mask map, max is the maximum pixel value 255, and b_org is the second pixel value of the corresponding pixel point in the original map. Thus, (a_mask / max) × b_org is the first data, and max - a_mask is the difference between the maximum pixel value and the first pixel value.
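For illustration (not part of the original text), formula (1) for a single pixel can be written as follows; note that a white mask pixel (a_mask = 255) keeps the original value, while a black mask pixel (a_mask = 0) yields 255, i.e. a white background.

```python
def fuse_pixel(a_mask, b_org, max_val=255):
    """Formula (1): d_pixel = (a_mask / max) * b_org + (max - a_mask)."""
    return (a_mask / max_val) * b_org + (max_val - a_mask)

# fuse_pixel(255, 120) -> 120.0 (lesion pixel kept)
# fuse_pixel(0, 120)   -> 255.0 (background whitened)
```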
S406: form the target image of the lesion area according to the target pixel values of all pixel points.
After the target pixel value of each pixel point is obtained according to formula (1), the target pixel values of all pixel points are placed at the positions corresponding to the pixel points in the original map or the mask map, yielding the target image that contains only the lesion area.
In some embodiments, the pixel value of each pixel point in the mask map is fused with the pixel value of the corresponding pixel point in the original map to obtain the target pixel value of that pixel point, so that the image of the lesion area can be extracted.
In some embodiments, before the skin part of the patient to be diagnosed is scanned to acquire the first image for diagnosis, the neural network is trained so that it has the desired function of acquiring the position information of the lesion area in the first image. The training method of the neural network, as shown in FIG. 5, may include the following steps:
S501: acquire a sample image.
In some embodiments, images of the skin parts of previously diagnosed patients may be obtained from hospitals as sample images.
S502: acquire the annotation data of the sample image.
After the sample image is acquired, the position data of the lesion area in the sample image can be marked by manual annotation.
For example, taking a circle as an example, the center coordinates and radius value of the lesion area in the sample image, as well as the center coordinates and radius value of the sample image itself, can be annotated, thereby obtaining the annotation data of the sample image. The annotation data includes the center coordinates and radius value of the lesion area in the sample image, and the center coordinates and radius value of the sample image.
As shown in FIG. 6, for one manually annotated sample image, the center coordinates of the sample image are (x1, y1) with radius value r1, and the center coordinates of the lesion area in the sample image are (x2, y2) with radius value r2.
Through the above steps, multiple sample data sets for training the neural network can be obtained; each sample in a data set includes a sample image and the annotation corresponding to that sample image.
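One possible record layout for such a sample is sketched below, purely as an illustration; the field names and values are hypothetical and do not come from the disclosure.

```python
# One training sample following the circular annotation of FIG. 6;
# all names and numbers here are illustrative only.
sample = {
    "image_path": "samples/0001.png",  # sample image
    "image_center": (300, 300),        # (x1, y1)
    "image_radius": 300,               # r1
    "lesion_center": (412, 276),       # (x2, y2)
    "lesion_radius": 57,               # r2
}
```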
S503: train the neural network with the sample images and annotation data to form a neural network having the desired function.
The training process of neural networks is generally known in the art. By inputting the sample images and annotation data into a neural network with initial parameters, the parameters of the neural network are continuously optimized, forming a neural network having the function of acquiring the position information of the lesion area in the first image. In the resulting neural network, the input of the input layer is an image, and the output of the output layer is the position information of the lesion area.
The neural network formed by training executes as described in the above embodiments. A specific embodiment is shown below.
S504: acquire a first image, the first image including an image formed of the skin part of the patient to be diagnosed that needs to be diagnosed.
S505: input the first image into the neural network to acquire the position information of the lesion area in the first image.
S506: acquire the boundary of the lesion area in the first image, and acquire, from the first image, the original map and the mask map including the lesion area.
Steps S504-S506 are similar to steps S101-S103 in the foregoing embodiment and are therefore not repeated here.
S507: acquire the first matrix of the mask map and the second matrix of the original map.
In some embodiments, the first pixel value of each pixel point in the mask map is acquired, and the first pixel values of the pixel points are used to form the first matrix of the mask map, where the position of the first pixel value of each pixel point in the first matrix is determined by the position of that pixel point in the mask map.
As one example, the first pixel values of the first row of pixel points in the mask map may be used as the first row of elements of the first matrix, and the first pixel values of the second row of pixel points in the mask map as the second row of elements of the first matrix; that is, the numbers of rows and columns of pixel points in the mask map are the same as the numbers of rows and columns of the first matrix.
As another example, the first pixel values of the first row of pixel points in the mask map may be used as the first column of elements of the first matrix, and the first pixel values of the second row of pixel points in the mask map as the second column of elements of the first matrix. In that case, the numbers of rows and columns of pixel points in the mask map correspond to the numbers of columns and rows of the first matrix, respectively.
Similarly, the second pixel value of each pixel point in the original map is extracted, and the second pixel values of the pixel points are used to form the second matrix of the original map, where the position of the second pixel value of each pixel point in the second matrix is determined by the position of that pixel point in the original map.
S508: obtain the target pixel values of the pixel points from the first matrix and the second matrix.
In some embodiments, the first matrix may be divided by the maximum pixel value, that is, each first pixel value in the first matrix is divided by the maximum pixel value, to obtain the third matrix. Then, the pixel value of each pixel point in the third matrix is multiplied by the second pixel value of the corresponding pixel point in the second matrix to obtain the fourth matrix. The first pixel value of each pixel point in the first matrix is then subtracted from the maximum pixel value to obtain the fifth matrix. Finally, the fourth matrix and the fifth matrix are added to obtain the sixth matrix. The pixel value of each pixel point in the sixth matrix is the target pixel value of that pixel point, as shown in formulas (2) and (3):

C = A_mask / max      (2)
D_pixel = D + (MAX - A_mask)      (3)

where C is the third matrix, A_mask is the first matrix of the mask map, and max is the maximum pixel value 255. D is the fourth matrix, obtained by multiplying, element-wise, the pixel value of each pixel point in the third matrix C by the second pixel value of the corresponding pixel point in the second matrix B_org; the value of every element of the matrix MAX is the maximum pixel value, and its numbers of rows and columns are the same as those of the first matrix A_mask; MAX - A_mask gives the fifth matrix; and D_pixel denotes the sixth matrix.
Note that the numbers of rows and columns of the fourth matrix in the above embodiment are the same as the numbers of rows and columns of the first matrix.
Computing the target pixel values of the pixel points through matrices formed from the pixel values of the mask map and the original map improves the computation speed.
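A minimal NumPy sketch of formulas (2)-(3) is given below for illustration; it assumes the mask map A_mask and the original map B_org are uint8 arrays of identical shape (for a color original, the mask would be replicated per channel).

```python
import numpy as np


def fuse_matrices(mask_map, original, max_val=255.0):
    """Vectorized form of formulas (2) and (3)."""
    a = mask_map.astype(np.float64)   # first matrix  A_mask
    b = original.astype(np.float64)   # second matrix B_org
    c = a / max_val                   # third matrix  C = A_mask / max
    d = c * b                         # fourth matrix D (element-wise product)
    fifth = max_val - a               # fifth matrix  MAX - A_mask
    d_pixel = d + fifth               # sixth matrix  D_pixel
    return d_pixel.astype(np.uint8)   # target pixel values
```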
S509: form the target image of the lesion area according to the target pixel values of all pixel points.
After the sixth matrix is computed according to step S508, the target pixel value of each pixel point in the sixth matrix is placed at the position corresponding to that pixel point in the original map or the mask map, forming the target image of the lesion area.
S510: diagnose the lesion area in the target image to obtain a diagnosis result.
The diagnosis of S510 may be performed by a doctor, or a diagnosis result may be output by computer-aided diagnosis (CAD) software.
In some embodiments, images of lesion areas that have already been diagnosed may be collected as sample images, together with the diagnosis results for those images; the sample images and the corresponding diagnosis results are then used to build a neural network for diagnosis, such that the diagnostic neural network converges or its error stabilizes within the allowable error range. Once a trained neural network is obtained, it can be used to diagnose patients.
Although the implementation of computer-aided diagnosis is described above in terms of a neural network, other machine-learning techniques can also be trained to form a diagnosis model.
After the target image of the lesion area is obtained, the target image is input into a pre-trained diagnosis model, which diagnoses the lesion area in the target image and outputs the diagnosis result.
In some embodiments, inputting the image of the lesion area into the diagnosis model as the image used for diagnosis improves the accuracy of the diagnosis result.
In some embodiments, to improve the robustness of the neural network used for diagnosis, a step of pre-processing the sample images during training is also included.
In some embodiments, a selected proportion of the sample images is randomly extracted for hair supplementation processing. For example, one quarter of the sample images may be extracted for hair supplementation processing. For example, an image processing method can be used to simulate the drawing of hair, which is randomly added with a certain probability to the skin region in the sample image.
In some embodiments, a selected proportion of the sample images is randomly extracted for color enhancement processing, where color enhancement covers aspects such as color saturation, brightness, and contrast.
Applying hair supplementation processing or color enhancement processing, or both, to a preset proportion of the sample images increases the diversity of the sample images and improves the accuracy of the trained neural network's output.
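The sketch below shows one plausible reading of these two augmentations, for illustration only; the disclosure merely says that hair is simulated and randomly added and that saturation, brightness, and contrast are enhanced, so the random dark polylines and the jitter ranges here are assumptions.

```python
import random
import numpy as np
import cv2
from PIL import ImageEnhance


def add_hair(img_bgr, max_hairs=15):
    """Hair supplementation: draw thin dark random curves on the image."""
    h, w = img_bgr.shape[:2]
    out = img_bgr.copy()
    for _ in range(random.randint(1, max_hairs)):
        pts = np.cumsum(np.random.randint(-15, 16, size=(6, 2)), axis=0)
        pts += np.array([random.randrange(w), random.randrange(h)])
        pts = pts.clip([0, 0], [w - 1, h - 1]).astype(np.int32)
        cv2.polylines(out, [pts], False, (20, 15, 10), thickness=1)
    return out


def color_enhance(img_pil):
    """Color enhancement: jitter saturation, brightness, and contrast."""
    for enhancer in (ImageEnhance.Color, ImageEnhance.Brightness,
                     ImageEnhance.Contrast):
        img_pil = enhancer(img_pil).enhance(random.uniform(0.8, 1.2))
    return img_pil
```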
The image processing method provided by at least one embodiment of the present disclosure obtains the position information of the lesion area in the captured image through a neural network and determines the boundary of the lesion area (in some embodiments, for example, based on the obtained position information and an image edge detection algorithm) to obtain the original map and the mask map containing the lesion area; fusing the original map and the mask map then yields, from the original map, an image containing only the lesion area.
The image processing method provided by at least one embodiment of the present disclosure scans the skin part of the patient to be diagnosed that needs to be diagnosed to form a first image for diagnosis, inputs the first image into a neural network to acquire the position information of the lesion area in the first image, determines the boundary of the lesion area (for example, according to the position information and an image edge detection algorithm), acquires, from the first image, the original map and the mask map including the lesion area, and fuses the mask map and the original map to obtain the target image corresponding to the lesion area, where the target image is the image used for diagnosing the lesion area and the pixel points in the target image correspond one-to-one with the pixel points in the original map and the mask map. By distinguishing the lesion area in the original image from the non-lesion areas and obtaining an image containing only the lesion area, the method accurately locates the lesion area in the skin image and extracts the image of the lesion area.
Some embodiments of the present disclosure also provide an image processing device.
As shown in FIG. 7, the image processing device includes a first acquisition module 710, a machine learning module 720, an extraction module 730, and a fusion module 740.
The first acquisition module 710 is configured to acquire a first image to be diagnosed, the first image including an image formed of the skin part of the patient to be diagnosed that needs to be diagnosed.
In some embodiments, the first acquisition module is integrated with a skin imaging device, which scans the skin part of the patient to be diagnosed that needs to be diagnosed to form the first image for diagnosis.
In some embodiments, the skin imaging device scans the skin part of the patient to be diagnosed that needs to be diagnosed to form the first image for diagnosis, and the first acquisition module is coupled to the skin imaging device to acquire the first image so formed.
The machine learning module 720 is configured to input the first image into the neural network to acquire the position information of the lesion area in the first image.
The extraction module 730 is configured to acquire the boundary of the lesion area in the first image and acquire, from the first image, the original map and the mask map including the lesion area.
The fusion module 740 is configured to fuse the mask map and the original map to obtain the target image corresponding to the lesion area, where the target image is the image used for diagnosing the lesion area and the pixel points in the target image correspond one-to-one with the pixel points in the original map and the mask map.
In some embodiments, as shown in FIG. 8, the fusion module 740 includes an acquisition unit 741, a fusion unit 742, and a formation unit 743.
The acquisition unit 741 is configured to acquire the first pixel value of each pixel point of the mask map and the second pixel value of each pixel point in the original map.
The fusion unit 742 is configured to fuse the first pixel value of a pixel point in the mask map with the second pixel value of the corresponding pixel point in the original map to obtain the target pixel value of the pixel point.
The formation unit 743 is configured to form the target image of the lesion area according to the target pixel values of all pixel points.
In some embodiments, the fusion unit 742 is further configured to:
take, for each pixel point, the ratio of the first pixel value to the maximum pixel value and multiply the ratio by the second pixel value to obtain first data;
acquire the difference between the maximum pixel value and the first pixel value; and
add the first data to the difference to obtain the target pixel value of the pixel point.
In some embodiments, the acquisition unit 741 is further configured to:
extract the first pixel value of each pixel point in the mask map and use the first pixel values to form the first matrix of the mask map, where the position of the first pixel value of each pixel point in the first matrix is determined by the position of that pixel point in the mask map; and
extract the second pixel value of each pixel point in the original map and use the second pixel values to form the second matrix of the original map, where the position of the second pixel value of each pixel point in the second matrix is determined by the position of that pixel point in the original map.
The fusion unit 742 is also configured to:
divide the first matrix by the maximum pixel value to obtain the third matrix;
multiply the pixel value of each pixel point in the third matrix by the second pixel value of the corresponding pixel point in the second matrix to obtain the fourth matrix;
subtract the first pixel value of each pixel point in the first matrix from the maximum pixel value to obtain the fifth matrix; and
add the fourth matrix and the fifth matrix to obtain the sixth matrix, where the pixel value of each pixel point in the sixth matrix is the target pixel value of that pixel point.
In some embodiments, the image processing device further includes:
a diagnosis module configured to input the target image into the diagnosis model and diagnose the lesion area in the target image to obtain the diagnosis result.
In some embodiments, the image processing device further includes:
a second acquisition module configured to acquire images of previously diagnosed patients as sample images for training the constructed initial neural network;
a third acquisition module configured to acquire the annotation data of the sample images, the annotation data including the position of the lesion area in the sample image; and
a training module configured to input the sample images and annotation data into the neural network to train it, forming a neural network having the function of acquiring the position information of the lesion area in the first image.
In some embodiments, the image processing device further includes:
a pre-processing module for randomly extracting a preset proportion of the sample images for hair supplementation processing; and/or
randomly extracting a preset proportion of the sample images for color enhancement processing.
It should be noted that the foregoing explanations of the image processing method embodiments also apply to the image processing device of this embodiment and are not repeated here.
The image processing device provided by at least one embodiment of the present disclosure scans the skin part of the patient to be diagnosed that needs to be diagnosed to form a first image for diagnosis, inputs the first image into a neural network for learning to acquire the position information of the lesion area in the first image, acquires, from the first image, the original map and the mask map including the lesion area, and fuses the mask map and the original map to obtain the target image corresponding to the lesion area, where the target image is the image used for diagnosing the lesion area and the pixel points in the target image correspond one-to-one with the pixel points in the original map and the mask map. By fusing the original map and the mask map, the lesion area in the original image can be distinguished from the non-lesion areas and an image containing only the lesion area can be obtained, accurately locating and extracting the lesion area from the captured image and reducing labor costs.
Some embodiments of the present disclosure also provide a computer device comprising a processor, a memory, and computer program instructions stored in the memory which, when executed by the processor, cause the processor to perform one or more steps of the image processing method provided by at least one embodiment of the present disclosure.
Some embodiments of the present disclosure also provide a non-transitory computer-readable storage medium on which computer program instructions are stored; when the computer program instructions are executed by a processor, the processor is caused to perform one or more steps of the image processing method provided by at least one embodiment of the present disclosure.
Referring now to FIG. 9, a specific implementation structure of a computer device 800 suitable for implementing embodiments of the present disclosure is shown. The computer device shown in FIG. 9 is merely an example and should not impose any limitation on the function and scope of use of the embodiments of the present disclosure.
As shown in FIG. 9, the computer system 800 includes one or more processors 801 that can perform various operations according to program instructions stored in a memory 802 (for example, program instructions stored in a memory 802 such as a read-only memory or a disk and loaded into random-access memory (RAM)). The memory 802 also stores various programs and data required for the operation of the computer system 800. The processor 801 and the memory 802 are connected to each other through a bus 803. An input/output (I/O) interface 804 is also connected to the bus 803.
A variety of components can be connected to the I/O interface 804 to realize the input and output of information: for example, an input device 805 including a keyboard, a mouse, and the like; an output device 806 including a cathode-ray tube (CRT), a liquid-crystal display (LCD), a speaker, and the like; and a communication device 807 including a network interface card such as a LAN card or a modem. The communication device 807 performs communication processing via a network such as the Internet. A drive 808 is also connected to the I/O interface 804 as needed. A removable medium 809, such as a magnetic disk, an optical disc, or a flash memory, is connected to or mounted on the drive 808 as needed.
The processor 801 may be a central processing unit (CPU), a field-programmable gate array (FPGA), a microcontroller unit (MCU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or another logic operation device having data processing capability and/or program execution capability.
The bus 803 may be a Front Side Bus (FSB), QuickPath Interconnect (QPI), Direct Media Interface (DMI), Peripheral Component Interconnect (PCI), Peripheral Component Interconnect Express (PCI-E), HyperTransport (HT), or the like.
According to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the image processing method of at least one embodiment of the present disclosure. In such an embodiment, the computer program can be downloaded and installed from a network via the communication device 807, and/or installed from the removable medium 809. When the computer program is executed by the processor 801, the above-described functions defined in the system of the present disclosure are executed.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. In this specification, schematic references to these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, where no contradiction arises, those skilled in the art may combine different embodiments or examples described in this specification and the features of different embodiments or examples.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, "a plurality" means at least two, for example two or three, unless specifically defined otherwise.
Any process or method description in the flowcharts, or otherwise described herein, may be understood as representing a module, segment, or portion of code comprising one or more executable instructions for implementing the steps of a custom logic function or process, and the scope of the preferred embodiments of the present disclosure includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order depending on the functions involved; this should be understood by those skilled in the art to which the embodiments of the present disclosure belong.
The logic and/or steps represented in the flowcharts or otherwise described herein may, for example, be considered an ordered list of executable instructions for implementing logic functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random-access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact-disc read-only memory (CD-ROM). The computer-readable medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present disclosure can be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one of the following techniques well known in the art, or a combination thereof, may be used: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
Those of ordinary skill in the art will understand that all or part of the steps carried by the methods of the above embodiments can be completed by instructing the relevant hardware through a program, and the program may be stored in a computer-readable storage medium; when executed, the program includes one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the various embodiments of the present disclosure may be integrated into one processing module, or each unit may exist physically on its own, or two or more units may be integrated into one module. The above integrated module can be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and is sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although embodiments of the present disclosure have been shown and described above, it should be understood that the above embodiments are exemplary and are not to be construed as limiting the present disclosure; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present disclosure.

Claims (17)

  1. 一种图像处理方法,包括:
    获取第一图像,所述第一图像包括对待诊断患者需要诊断的皮肤部位形成的图像;
    将所述第一图像输入到神经网络中,获取病变区域在所述第一图像中的位置信息;
    获取第一图像中病变区域的边界,从所述第一图像中获取包括所述病变区域的原始图和掩码图;和
    将所述掩码图和所述原始图进行融合,得到所述病变区域对应的目标图像;其中,所述目标图像为用于对所述病变区域进行诊断的图像,所述目标图像中的像素点与所述原始图和所述掩码图中的像素点一一对应。
  2. 根据权利要求1所述的图像处理方法,其中所述将所述掩码图和所述原始图进行融合,得到所述病变区域对应的目标图像,包括:
    获取所述掩码图的每个像素点的第一像素值和所述原始图中每个像素点的第二像素值;
    将所述掩码图中的每个像素点的所述第一像素值,与所述原始图中对应像素点的所述第二像素值进行融合,获取到每个像素点的目标像素值;和
    根据所有像素点的所述目标像素值,形成所述病变区域的所述目标图像。
  3. 根据权利要求2所述的图像处理方法,其中所述将所述掩码图中的每个像素点的所述第一像素值,与所述原始图中对应像素点的所述第二像素值进行融合,获取到每个像素点的目标像素值,包括:
    利用所述掩码图中的每个像素点的所述第一像素值与最大像素值做比值,将所述比值与所述原始图中对应像素点的所述第二像素值相乘,得到第一数据;
    获取所述最大像素值与所述第一像素值的差值;和
    将所述第一数据与所述差值相加,得到每个像素点的所述目标像素值。
  4. 根据权利要求2所述的图像处理方法,其中所述获取所述掩码图的每个像素点的第一像素值和所述原始图中每个像素点的第二像素值,包括:
    提取所述掩码图中每个像素点的所述第一像素值,利用每个像素点的所述第一像素值,构成所述掩码图的第一矩阵,其中,每个像素点的所述第一像素值在所述第一矩阵中的位 置是由所述像素点在所述掩码图中的位置确定的,和
    提取所述原始图中每个像素点的第二像素值,利用每个像素点的所述第二像素值,构成所述原始图的第二矩阵,其中,每个像素点的所述第二像素值在所述第二矩阵中的位置是由所述像素点在所述原始图中的位置确定的;
    所述将所述掩码图中的每个像素点的所述第一像素值,与所述原始图中对应像素点的所述第二像素值进行融合,获取到每个像素点的目标像素值,包括:
    将所述第一矩阵与最大像素值相除,得到第三矩阵,
    将所述第三矩阵中每个像素点的像素值与所述第二矩阵中对应像素点的所述第二像素值相乘,得到第四矩阵,
    利用所述最大像素值分别与所述第一矩阵中每个像素点的所述第一像素值相减,得到第五矩阵,和
    将所述第四矩阵和所述第五矩阵相加,得到第六矩阵,其中,所述第六矩阵中每个像素点的像素值为每个像素点的所述目标像素值。
  5. The image processing method according to any one of claims 1-4, wherein, after fusing the mask map and the original map to obtain the target image corresponding to the lesion region, the method further comprises:
    diagnosing the lesion region in the target image to obtain a diagnosis result.
  6. The image processing method according to any one of claims 1-4, wherein, before acquiring the first image, the method further comprises:
    acquiring sample images;
    acquiring annotation data of the sample images, the annotation data comprising position information of the lesion region in the sample images; and
    training a neural network with the sample images and the annotation data to form a neural network having the desired function.
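Claim 6 describes a conventional supervised training setup. A compressed sketch follows; PyTorch and every name in it are assumptions, since the disclosure fixes neither a framework nor a loss function:

```python
import torch

def train_network(network, loader, epochs=10):
    """Train on (sample image, position annotation) pairs per claim 6.
    `loader` is assumed to yield image tensors and position targets,
    e.g. (center x, center y, radius)."""
    optimizer = torch.optim.Adam(network.parameters(), lr=1e-4)
    loss_fn = torch.nn.MSELoss()  # simple regression on position information
    for _ in range(epochs):
        for images, positions in loader:
            optimizer.zero_grad()
            loss = loss_fn(network(images), positions)
            loss.backward()
            optimizer.step()
    return network
```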
  7. The image processing method according to claim 6, wherein, before training the neural network with the sample images and the annotation data to form the neural network having the desired function, the method further comprises:
    randomly extracting a selected proportion of the sample images for hair supplementation processing; and/or
    randomly extracting a selected proportion of the sample images for color enhancement processing.
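A sketch of the sampling logic in claim 7. The two processing routines are deliberate stubs, since the disclosure does not specify here how hair supplementation or color enhancement is carried out; only the random selection of a chosen proportion is illustrated:

```python
import random

def add_hair(image):       # hypothetical hair-supplementation routine
    return image

def enhance_color(image):  # hypothetical color-enhancement routine
    return image

def augment_samples(samples, hair_ratio=0.2, color_ratio=0.2):
    """Randomly draw the selected proportions of sample images for each
    augmentation, per claim 7; the two draws are independent."""
    for i in random.sample(range(len(samples)), int(len(samples) * hair_ratio)):
        samples[i] = add_hair(samples[i])
    for i in random.sample(range(len(samples)), int(len(samples) * color_ratio)):
        samples[i] = enhance_color(samples[i])
    return samples
```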
  8. The image processing method according to any one of claims 1-7, wherein the position information comprises center coordinates and a radius value of the lesion region.
  9. The image processing method according to any one of claims 1-7, wherein the position information comprises center coordinates, a major-axis radius value, and a minor-axis radius value of the lesion region.
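Claims 8 and 9 give two shapes for the position information, a circle and an ellipse. A small illustrative container (field names are ours, not the disclosure's):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LesionPosition:
    cx: float                             # center x-coordinate
    cy: float                             # center y-coordinate
    radius: Optional[float] = None        # claim 8: circular lesion region
    major_radius: Optional[float] = None  # claim 9: elliptical lesion region
    minor_radius: Optional[float] = None
```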
  10. The image processing method according to any one of claims 1-7, further comprising preprocessing the first image before inputting the first image into the neural network to acquire the position information of the lesion region in the first image.
  11. The image processing method according to claim 10, wherein preprocessing the first image comprises:
    acquiring the size of the first image;
    comparing the size of the first image with an image resolution parameter of an input layer of the neural network to determine whether the size of the first image is greater than the image resolution parameter of the input layer;
    in response to determining that the size of the first image is greater than the image resolution parameter of the input layer, cropping or shrinking the first image; and
    in response to determining that the size of the first image is smaller than the image resolution parameter of the input layer, enlarging the first image.
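The compare-and-resize logic of claims 10-11 can be sketched with OpenCV (an assumption). Scaling is used in both directions here, although the claim equally permits cropping an oversized image:

```python
import cv2
import numpy as np

def preprocess(first_image: np.ndarray, input_w: int, input_h: int) -> np.ndarray:
    """Match the first image to the input layer's image resolution
    parameter (input_w x input_h), per claim 11."""
    h, w = first_image.shape[:2]
    if (w, h) == (input_w, input_h):
        return first_image
    if w > input_w or h > input_h:
        interpolation = cv2.INTER_AREA    # shrinking (cropping is the claimed alternative)
    else:
        interpolation = cv2.INTER_LINEAR  # enlarging
    return cv2.resize(first_image, (input_w, input_h), interpolation=interpolation)
```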
  12. The image processing method according to any one of claims 1-11, wherein the step of acquiring the boundary of the lesion region in the first image so as to acquire, from the first image, the original map and the mask map including the lesion region is based on the position information and an image edge detection algorithm.
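One plausible realization of claim 12 crops a neighborhood around the predicted position and runs a conventional edge detector there. Canny, the margin factor, and the contour fill below are all assumptions, not requirements of the claim:

```python
import cv2
import numpy as np

def lesion_mask(first_image: np.ndarray, cx: int, cy: int, radius: int) -> np.ndarray:
    """Derive a mask map from the position information (claim 8 style:
    center and radius) plus edge detection; assumes a BGR color image."""
    r = int(radius * 1.2)  # assumed safety margin around the lesion
    x0, y0 = max(cx - r, 0), max(cy - r, 0)
    roi = first_image[y0:cy + r, x0:cx + r]
    edges = cv2.Canny(cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY), 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros(first_image.shape[:2], dtype=np.uint8)
    if contours:
        # Shift the largest contour back to full-image coordinates and fill it.
        boundary = max(contours, key=cv2.contourArea) + np.array([x0, y0])
        cv2.drawContours(mask, [boundary.astype(np.int32)], -1, 255, thickness=-1)
    return mask
```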
  13. The image processing method according to any one of claims 1-11, wherein acquiring the first image comprises scanning the skin region, requiring diagnosis, of the patient to be diagnosed to form the first image.
  14. The image processing method according to any one of claims 1-11, wherein fusing the mask map and the original map comprises performing a bitwise AND operation on the mask map and the original map.
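Claim 14 replaces the arithmetic of claims 3-4 with a bitwise AND. In array terms (a sketch assuming equally shaped 8-bit maps):

```python
import numpy as np

def fuse_bitwise(mask_map: np.ndarray, original_map: np.ndarray) -> np.ndarray:
    """Claim 14: target image = mask map AND original map, per pixel.
    With a binary mask (0 or 255), this keeps original pixels inside
    the lesion region and zeroes the background."""
    return np.bitwise_and(mask_map, original_map)
```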
  15. An image processing device, comprising:
    a first acquisition module configured to acquire a first image to be diagnosed, the first image comprising an image formed of a skin region, requiring diagnosis, of a patient to be diagnosed;
    a machine learning module configured to input the first image into a neural network and acquire position information of a lesion region in the first image;
    an extraction module configured to acquire a boundary of the lesion region in the first image and to acquire, from the first image, an original map and a mask map that include the lesion region; and
    a fusion module configured to fuse the mask map and the original map to obtain a target image corresponding to the lesion region, wherein the target image is an image used for diagnosing the lesion region, and pixels in the target image correspond one-to-one to pixels in the original map and in the mask map.
  16. A computer device, comprising: a processor; a memory; and computer program instructions stored in the memory which, when run by the processor, cause the processor to execute the image processing method according to any one of claims 1-14.
  17. A non-transitory computer-readable storage medium having a computer program stored thereon which, when run by a processor, causes the processor to execute the image processing method according to any one of claims 1-14.
PCT/CN2018/101242 2017-08-23 2018-08-20 Image processing method and device WO2019037676A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/619,771 US11170482B2 (en) 2017-08-23 2018-08-20 Image processing method and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710730705.2A CN107464230B (zh) 2017-08-23 Image processing method and device
CN201710730705.2 2017-08-23

Publications (1)

Publication Number Publication Date
WO2019037676A1 true WO2019037676A1 (zh) 2019-02-28

Family

ID=60550333

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/101242 WO2019037676A1 (zh) 2017-08-23 2018-08-20 Image processing method and device

Country Status (3)

Country Link
US (1) US11170482B2 (zh)
CN (1) CN107464230B (zh)
WO (1) WO2019037676A1 (zh)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107464230B (zh) 2017-08-23 2020-05-08 京东方科技集团股份有限公司 Image processing method and device
CN109754402B 2018-03-15 2021-11-19 京东方科技集团股份有限公司 Image processing method, image processing device, and storage medium
CN108389172B (zh) * 2018-03-21 2020-12-18 百度在线网络技术(北京)有限公司 Method and device for generating information
CN109064397B (zh) * 2018-07-04 2023-08-01 广州希脉创新科技有限公司 Image stitching method and system based on a camera headset
CN108985302A (zh) * 2018-07-13 2018-12-11 东软集团股份有限公司 Dermoscopy image processing method, device, and apparatus
CN109247914A (zh) * 2018-08-29 2019-01-22 百度在线网络技术(北京)有限公司 Disease data acquisition method and device
CN110338835B (zh) * 2019-07-02 2023-04-18 深圳安科高技术股份有限公司 Intelligent scanning stereoscopic monitoring method and system
CN110764090A (zh) * 2019-10-22 2020-02-07 上海眼控科技股份有限公司 Image processing method and device, computer equipment, and readable storage medium
CN110942456B (zh) * 2019-11-25 2024-01-23 深圳前海微众银行股份有限公司 Tampered image detection method, device, equipment, and storage medium
JP6854554B1 (ja) * 2020-06-11 2021-04-07 Pst株式会社 Information processing device, information processing method, information processing system, and information processing program
CN111753847B (zh) * 2020-06-28 2023-04-18 浙江大华技术股份有限公司 Image preprocessing method and device, storage medium, and electronic device
CN112288723B (zh) * 2020-10-30 2023-05-23 北京市商汤科技开发有限公司 Defect detection method and device, computer equipment, and storage medium
CN112365515A (zh) * 2020-10-30 2021-02-12 深圳点猫科技有限公司 Edge detection method, device, and equipment based on a dense perception network
CN112668573B (zh) * 2020-12-25 2022-05-10 平安科技(深圳)有限公司 Method and device for determining object detection and localization confidence, electronic equipment, and storage medium
CN112862685B (zh) * 2021-02-09 2024-02-23 北京迈格威科技有限公司 Image stitching processing method, device, and electronic system
CN113222874B (zh) * 2021-06-01 2024-02-02 平安科技(深圳)有限公司 Data augmentation method, device, equipment, and storage medium for object detection
CN114298937B (zh) * 2021-12-29 2024-05-17 深圳软牛科技集团股份有限公司 Method and device for repairing JPEG photos, and related components
CN114549570B (zh) * 2022-03-10 2022-10-18 中国科学院空天信息创新研究院 Fusion method and device for optical images and SAR images
CN114722925B (zh) * 2022-03-22 2022-11-15 北京安德医智科技有限公司 Lesion classification device and non-volatile computer-readable storage medium
CN117036878B (zh) * 2023-07-19 2024-03-26 北京透彻未来科技有限公司 Method and system for fusing artificial-intelligence-predicted images with digital pathology images

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107106020A (zh) * 2014-10-29 2017-08-29 组织分析股份有限公司 System and method for analyzing and transmitting data, images, and videos relating to mammalian skin lesions
US20170017841A1 (en) * 2015-07-17 2017-01-19 Nokia Technologies Oy Method and apparatus for facilitating improved biometric recognition using iris segmentation
CN106611402B (zh) * 2015-10-23 2019-06-14 腾讯科技(深圳)有限公司 Image processing method and device
US9684967B2 (en) * 2015-10-23 2017-06-20 International Business Machines Corporation Imaging segmentation using multi-scale machine learning approach
CN105894470A (zh) * 2016-03-31 2016-08-24 北京奇艺世纪科技有限公司 Image processing method and device
CN106447721B (zh) * 2016-09-12 2021-08-10 北京旷视科技有限公司 Image shadow detection method and device
US10531825B2 (en) * 2016-10-14 2020-01-14 Stoecker & Associates, LLC Thresholding methods for lesion segmentation in dermoscopy images

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104217437A (zh) * 2014-09-19 2014-12-17 西安电子科技大学 Lesion region segmentation method for prostate KVCT images
CN104992430A (zh) * 2015-04-14 2015-10-21 杭州奥视图像技术有限公司 Fully automatic three-dimensional liver segmentation method based on a convolutional neural network
CN106056596A (zh) * 2015-11-30 2016-10-26 浙江德尚韵兴图像科技有限公司 Fully automatic three-dimensional liver segmentation method based on local prior information and convex optimization
CN106203432A (zh) * 2016-07-14 2016-12-07 杭州健培科技有限公司 Method for locating a region of interest based on a convolutional neural network saliency map
CN107464230A (zh) * 2017-08-23 2017-12-12 京东方科技集团股份有限公司 Image processing method and device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110660011A (zh) * 2019-09-29 2020-01-07 厦门美图之家科技有限公司 Image processing method and device, electronic equipment, and storage medium
CN110660011B (zh) * 2019-09-29 2022-11-01 厦门美图之家科技有限公司 Image processing method and device, electronic equipment, and storage medium

Also Published As

Publication number Publication date
CN107464230B (zh) 2020-05-08
US20200143526A1 (en) 2020-05-07
CN107464230A (zh) 2017-12-12
US11170482B2 (en) 2021-11-09

Similar Documents

Publication Publication Date Title
WO2019037676A1 (zh) Image processing method and device
US10810735B2 (en) Method and apparatus for analyzing medical image
CN110232383B (zh) Lesion image recognition method and lesion image recognition system based on a deep learning model
US11900647B2 (en) Image classification method, apparatus, and device, storage medium, and medical electronic device
EP3979892A1 (en) Systems and methods for processing colon images and videos
US20220157047A1 (en) Feature Point Detection
JP4911029B2 (ja) Abnormal shadow candidate detection method and abnormal shadow candidate detection device
CN113573654A (zh) AI system for detecting and measuring lesion size
EP3998579B1 (en) Medical image processing method, apparatus and device, medium and endoscope
WO2020259453A1 (zh) Method, apparatus, and device for classifying 3D images, and storage medium
CN113470029B (zh) Training method and device, image processing method, electronic equipment, and storage medium
WO2021189848A1 (zh) Model training method, cup-to-disc ratio determination method, device, equipment, and storage medium
JP2013051988A (ja) Image processing device, image processing method, and image processing program
CN114332132A (zh) Image segmentation method, device, and computer equipment
KR20190090986A (ko) System and method for supporting the reading of chest medical images
WO2020215485A1 (zh) Fetal growth parameter measurement method, system, and ultrasound device
Kirkerød et al. Unsupervised preprocessing to improve generalisation for medical image classification
US20230237657A1 (en) Information processing device, information processing method, program, model generating method, and training data generating method
JP6810212B2 (ja) Image identification method and image identification device
WO2021097595A1 (zh) Method, device, and server for segmenting lesion regions in images
JP2022179433A (ja) Image processing device and image processing method
US20220245797A1 (en) Information processing apparatus, information processing method, and information processing program
CN110675444B (zh) Method and device for determining a head CT scan region, and image processing equipment
CN113962958A (zh) Sign detection method and device
JP2020080913A (ja) Automatic segmentation device and automatic segmentation method for organ-of-interest images from non-contrast CT images, based on a three-dimensional medial axis model

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18849220

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17.08.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18849220

Country of ref document: EP

Kind code of ref document: A1