WO2021189912A1 - Method and apparatus for detecting a target object in an image, electronic device and storage medium - Google Patents

Method and apparatus for detecting a target object in an image, electronic device and storage medium

Info

Publication number
WO2021189912A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
image
standard
training image
feature map
Prior art date
Application number
PCT/CN2020/131992
Other languages
English (en)
Chinese (zh)
Inventor
刁勍琛
伍世宾
黄凌云
刘玉宇
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2021189912A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation

Definitions

  • This application relates to the field of image analysis technology, and in particular to a method, device, electronic equipment, and computer-readable storage medium for detecting a target in an image.
  • A method for detecting a target in an image provided by this application includes:
  • acquiring a training image, and performing noise reduction processing on the training image to obtain a standard training image, wherein the training image includes standard center point information, standard size information, and standard boundary information of the target;
  • constructing a target detection model;
  • performing target detection on the standard training image by using the target detection model to obtain a detection result, wherein the detection result includes predicted center point information, predicted size information, and predicted boundary information of the target;
  • constructing a target loss function according to the detection result and the standard training image;
  • calculating a loss value of the target loss function, and optimizing the target detection model according to the loss value to obtain a standard target detection model;
  • acquiring an image of the target to be detected, and using the standard target detection model to perform image detection on the image of the target to be detected to obtain a standard detection result.
  • the present application also provides a device for detecting a target in an image, the device including:
  • the image noise reduction module is used to obtain training images, perform noise reduction processing on the training images to obtain standard training images, wherein the training images include standard center point information, standard size information, and standard boundary information of the target;
  • the model building module is used to build a target detection model;
  • the target detection module is used to perform target detection on the standard training image by using the target detection model to obtain a detection result, wherein the detection result includes the predicted center point information, predicted size information, and predicted boundary information of the target;
  • a loss function construction module configured to construct a target loss function according to the detection result and the standard training image
  • a model optimization module is used to calculate the loss value of the target loss function, and optimize the target detection model according to the loss value to obtain a standard target detection model;
  • the standard detection module is used to obtain an image of a target object to be detected, and use the standard target object detection model to perform image detection on the image of the target object to be detected to obtain a standard detection result.
  • This application also provides an electronic device, which includes:
  • at least one processor; and
  • a memory communicatively connected with the at least one processor; wherein,
  • the memory stores a computer program executable by the at least one processor, and the computer program is executed by the at least one processor, so that the at least one processor can execute the following steps:
  • acquiring a training image, and performing noise reduction processing on the training image to obtain a standard training image, wherein the training image includes standard center point information, standard size information, and standard boundary information of the target;
  • constructing a target detection model;
  • performing target detection on the standard training image by using the target detection model to obtain a detection result, wherein the detection result includes predicted center point information, predicted size information, and predicted boundary information of the target;
  • constructing a target loss function according to the detection result and the standard training image;
  • calculating a loss value of the target loss function, and optimizing the target detection model according to the loss value to obtain a standard target detection model;
  • acquiring an image of the target to be detected, and using the standard target detection model to perform image detection on the image of the target to be detected to obtain a standard detection result.
  • the present application also provides a computer-readable storage medium, including a storage data area and a storage program area, wherein the storage data area stores created data, and the storage program area stores a computer program; wherein, when the computer program is executed by the processor, the following steps are implemented:
  • acquiring a training image, and performing noise reduction processing on the training image to obtain a standard training image, wherein the training image includes standard center point information, standard size information, and standard boundary information of the target;
  • constructing a target detection model;
  • performing target detection on the standard training image by using the target detection model to obtain a detection result, wherein the detection result includes predicted center point information, predicted size information, and predicted boundary information of the target;
  • constructing a target loss function according to the detection result and the standard training image;
  • calculating a loss value of the target loss function, and optimizing the target detection model according to the loss value to obtain a standard target detection model;
  • acquiring an image of the target to be detected, and using the standard target detection model to perform image detection on the image of the target to be detected to obtain a standard detection result.
  • FIG. 1 is a schematic flowchart of a method for detecting a target in an image provided by an embodiment of the application
  • FIG. 2 is a schematic diagram of a flow of noise reduction processing on training images provided by an embodiment of the application
  • FIG. 3 is a schematic diagram of a process of using a target detection model to perform target detection on a standard training image according to an embodiment of the application;
  • FIG. 4 is a schematic diagram of modules of a detection device for a target in an image provided by an embodiment of the application;
  • FIG. 5 is a schematic diagram of the internal structure of an electronic device that implements a method for detecting a target object in an image provided by an embodiment of the application;
  • the execution subject of the method for detecting a target in an image provided by the embodiment of the application includes but is not limited to at least one of the electronic devices that can be configured to execute the method provided by the embodiment of the application, such as a server and a terminal.
  • the detection method of the target in the image can be executed by software or hardware installed in the terminal device or the server device, and the software can be a blockchain platform.
  • the server includes but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, etc.
  • This application provides a method for detecting a target in an image.
  • FIG. 1 is a schematic flowchart of a method for detecting a target in an image provided by an embodiment of this application.
  • the method can be executed by a device, and the device can be implemented by software and/or hardware.
  • the method for detecting the target in the image includes:
  • a Java statement with a data capture function is used to obtain training images from a blockchain node that stores training images, and the high data throughput of the blockchain node can improve the efficiency of obtaining training images.
  • the training image is an image including a target object, and the training image also includes standard center point information, standard size information, and standard boundary information of the target object.
  • the training image is a histopathological image containing a target lesion
  • the histopathological image includes standard center point information of the target lesion, standard size information of the target lesion, and standard boundary information of the target lesion.
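  • For illustration only, one way such an annotated training sample could be represented in code is sketched below; the field names and example values are assumptions made for this sketch and are not taken from the application.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TrainingSample:
    """One annotated training image for the target (lesion) detector.

    Field names are illustrative; the application only requires that each
    training image carry standard center point, size, and boundary labels.
    """
    image_path: str                  # path to the histopathological image
    center: Tuple[int, int]          # standard center point (row, col) of the target lesion
    size: Tuple[int, int]            # standard size (height, width) of the target lesion
    boundary: List[Tuple[int, int]]  # standard boundary as an ordered list of (row, col) points

# hypothetical example values
sample = TrainingSample(
    image_path="slides/case_001.png",
    center=(128, 96),
    size=(40, 32),
    boundary=[(110, 80), (110, 112), (148, 112), (148, 80)],
)
```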
  • FIG. 2 is a schematic diagram of a flow of noise reduction processing on training images provided by an embodiment of the application
  • the denoising processing on the training image to obtain a standard training image includes:
  • the first value is 1, and the second value is 0.
  • the calculating the average value of pixels in the preset neighborhood of the target pixel point includes:
  • the following average value calculation formula is used to calculate the average value of pixels in the preset neighborhood:
  • f(j,k) is the value of the pixel (j,k) in the preset neighborhood
  • g(x,y) is the average value of the pixels in the preset neighborhood of the pixel (x,y)
  • W is the preset neighborhood
  • (j,k) indexes the pixels in the preset neighborhood
  • med is the mean (averaging) operation
  • N is the number of pixels in the preset neighborhood.
  • the embodiment of the application performs noise reduction processing on the training image to obtain a standard training image, which can reduce noise in the training image, highlight the target in the training image, and improve the accuracy of the model trained using the standard training image.
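  • As an illustration of the neighborhood averaging described above, the following minimal sketch computes, for each target pixel, the average value g(x,y) of the N pixels f(j,k) in its preset neighborhood W. The window size, edge handling, and the specific averaging formula used here are assumptions, since the application's exact formula is not reproduced in this text.

```python
import numpy as np

def mean_filter_denoise(image: np.ndarray, radius: int = 1) -> np.ndarray:
    """Replace each pixel g(x, y) with the average of the N pixels f(j, k)
    in its preset neighborhood W (a (2*radius+1) x (2*radius+1) window).

    Edge pixels use the clipped part of the window that lies inside the image.
    """
    h, w = image.shape
    out = np.empty_like(image, dtype=np.float64)
    for x in range(h):
        for y in range(w):
            x0, x1 = max(0, x - radius), min(h, x + radius + 1)
            y0, y1 = max(0, y - radius), min(w, y + radius + 1)
            window = image[x0:x1, y0:y1]   # preset neighborhood W
            out[x, y] = window.mean()      # g(x, y) = (1/N) * sum of f(j, k)
    return out

# usage: denoise a (stand-in) training image before it is fed to the detection model
noisy = np.random.default_rng(0).integers(0, 256, size=(64, 64)).astype(np.float64)
standard_training_image = mean_filter_denoise(noisy, radius=1)
```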
  • the target detection model includes a plurality of parallel convolution channels with different resolutions.
  • the target detection model adopts the HRnet network structure, and the HRnet network uses multi-channel, multi-resolution branch parallel convolution to convolve the same feature, so as to obtain feature maps of the same feature of the target at different resolutions.
  • the HRnet network used in the embodiments of the present application changes from a traditional serial connection convolution to a parallel connection convolution, thereby obtaining rich high-resolution representations and improving the accuracy of target detection by the model.
  • FIG. 3 is a schematic diagram of a process of using a target detection model to perform target detection on a standard training image according to an embodiment of the application.
  • the use of the target detection model to perform target detection on the standard training image to obtain a detection result includes:
  • the image segmentation algorithm includes, but is not limited to, a region-based image segmentation algorithm, a threshold-based image segmentation algorithm, and an edge-based image segmentation algorithm.
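  • A minimal sketch of the threshold-based variant, applied to a single-channel fused feature map, is given below; the threshold value and the use of a binary mask as the detection output are illustrative assumptions rather than the application's exact procedure.

```python
import numpy as np

def threshold_segment(feature_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Threshold-based segmentation of a single-channel fused feature map.

    Pixels whose response exceeds the threshold are marked as target (1),
    the rest as background (0); the resulting mask is one simple way to read
    boundary information out of the detection output.
    """
    return (feature_map >= threshold).astype(np.uint8)

# usage with a random stand-in for a fused feature map
fused = np.random.default_rng(1).random((64, 64))
mask = threshold_segment(fused, threshold=0.7)
```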
  • the forward parallel convolution channel and the backward parallel convolution channel are relative terms.
  • For example, the target detection model includes four parallel convolution channels; of two adjacent parallel convolution channels that perform convolution on the standard training image, the earlier channel is referred to as the forward parallel convolution channel, and the later channel, relative to that earlier one, is referred to as the backward parallel convolution channel.
  • When the forward parallel convolution channel is the initial parallel convolution channel, the backward convolution performs convolution on the result obtained in the forward direction together with the input of the forward parallel convolution channel to obtain a feature map; otherwise, the backward convolution performs convolution on the result obtained in the forward direction together with the inputs of all forward parallel convolution channels to obtain a feature map.
  • the standard training image is convolved in the first parallel convolution channel to obtain the first feature map
  • the first parallel convolution channel, the second parallel convolution channel, the third parallel convolution channel, and the fourth parallel convolution channel are connected in parallel to obtain four feature maps of the same feature with different resolutions.
  • As the standard training image passes through successive parallel convolution channels of the target detection model, the resolution of the resulting feature map gradually decreases while its feature information is gradually enhanced. Therefore, the feature maps obtained through the multi-layer parallel convolution channels in the embodiment of the present application contain both high-resolution position information and low-resolution feature information, which is more conducive to the subsequent use of the feature maps for target detection and improves the accuracy of the target detection model, as illustrated in the sketch below.
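  • The following sketch illustrates the parallel multi-resolution convolution and fusion idea with two toy branches; it is a simplification for illustration only and not the HRnet structure used by the application.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchParallelConv(nn.Module):
    """Toy sketch of parallel multi-resolution convolution: a full-resolution
    branch and a half-resolution branch convolve the same input in parallel,
    then the low-resolution output is upsampled and concatenated with the
    high-resolution output to form a fused feature map.
    """
    def __init__(self, in_ch: int = 3, width: int = 16):
        super().__init__()
        self.high = nn.Conv2d(in_ch, width, kernel_size=3, padding=1)            # keeps resolution
        self.low = nn.Conv2d(in_ch, width, kernel_size=3, padding=1, stride=2)   # half resolution
        self.fuse = nn.Conv2d(2 * width, width, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        hi = F.relu(self.high(x))
        lo = F.relu(self.low(x))
        lo_up = F.interpolate(lo, size=hi.shape[-2:], mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([hi, lo_up], dim=1))   # fused multi-resolution feature map

# usage: one stand-in standard training image (batch of 1, 3 channels, 256x256)
model = TwoBranchParallelConv()
fused_feature_map = model(torch.randn(1, 3, 256, 256))
```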
  • the target loss function includes: a center point loss function, a size loss function, and a boundary loss function.
  • the target loss function is:
  • L_det is the center point loss function
  • L_size is the size loss function
  • L_bce is the boundary loss function
  • C is the number of target categories
  • H is the length of the standard training image
  • W is the width of the standard training image
  • N is the number of the standard training images
  • α and β are preset constants
  • p_cij is the predicted center point information
  • y_cij is the standard center point information
  • s_k is the predicted size information
  • p_ij is the predicted boundary information
  • y_ij is the standard boundary information.
  • The embodiment of this application uses a combination of a center point loss function, a size loss function, and a boundary loss function as the target loss function, so that three loss values, for the center point position, size, and boundary position of the target, are obtained at the same time and used to update the parameters of the target detection model, which is beneficial to improving the accuracy of the target detection model; an illustrative sketch of such a combined loss is given below.
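  • The sketch below shows one way such a combined loss could be computed. Since the application's exact formulas are not reproduced in this text, it assumes a CenterNet-style focal loss for the center point heat map (with α and β as the focal exponents), an L1 loss for the predicted sizes, and binary cross-entropy for the boundary map; the symbol names follow the definitions above.

```python
import torch
import torch.nn.functional as F

def combined_target_loss(pred_center, gt_center, pred_size, gt_size,
                         pred_boundary, gt_boundary, alpha=2.0, beta=4.0):
    """Sketch of a combined target loss L = L_det + L_size + L_bce (assumed forms).

    L_det: focal loss on the center-point heat map p_cij vs. y_cij over classes c
    and positions (i, j); L_size: L1 loss on the predicted sizes s_k;
    L_bce: binary cross-entropy on the predicted boundary map p_ij vs. y_ij.
    """
    pred_center = pred_center.clamp(1e-6, 1 - 1e-6)
    pos = gt_center.eq(1).float()
    neg = 1.0 - pos
    num_pos = pos.sum().clamp(min=1.0)

    # L_det: focal-style center point loss, averaged over positive locations
    pos_term = ((1 - pred_center) ** alpha) * torch.log(pred_center) * pos
    neg_term = ((1 - gt_center) ** beta) * (pred_center ** alpha) * torch.log(1 - pred_center) * neg
    l_det = -(pos_term + neg_term).sum() / num_pos

    # L_size: L1 regression on the predicted sizes
    l_size = F.l1_loss(pred_size, gt_size)

    # L_bce: binary cross-entropy on the boundary map
    l_bce = F.binary_cross_entropy(pred_boundary.clamp(1e-6, 1 - 1e-6), gt_boundary)

    return l_det + l_size + l_bce

# usage with random stand-in tensors: 1 image, C = 2 classes, 64x64 heat map
p_c = torch.rand(1, 2, 64, 64)
y_c = (torch.rand(1, 2, 64, 64) > 0.99).float()
loss = combined_target_loss(p_c, y_c, torch.rand(1, 2), torch.rand(1, 2),
                            torch.rand(1, 1, 64, 64), (torch.rand(1, 1, 64, 64) > 0.5).float())
```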
  • an optimization algorithm is used to optimize the parameters of the target detection model
  • When the loss value of the target loss function is greater than the preset loss threshold, the Adam optimization algorithm is used to optimize the parameters of the target detection model.
  • The Adam optimization algorithm can adaptively adjust the learning rate during the training process of the target detection model, which makes the target detection model more accurate and improves the performance of the target detection model; a minimal training-loop sketch is given below.
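  • A minimal training-loop sketch follows; the loss threshold, learning rate, and epoch limit are illustrative values, not values taken from the application.

```python
import torch

def optimize_until_threshold(model, loss_fn, data_loader, loss_threshold=0.1,
                             max_epochs=50, lr=1e-3):
    """Sketch of the optimization step: while the target loss stays above the
    preset loss threshold, the Adam optimizer keeps updating the model parameters;
    once the average loss drops to the threshold (or max_epochs is reached), the
    current model is taken as the standard target detection model.
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(max_epochs):
        epoch_loss = 0.0
        for images, targets in data_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), targets)
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item()
        epoch_loss /= max(len(data_loader), 1)
        if epoch_loss <= loss_threshold:
            break
    return model   # standard target detection model
```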
  • the image of the target object to be detected includes a medical image of a biological tissue.
  • the image of the lesion to be detected can be uploaded by the user through a user-side program.
  • the image of the lesion to be detected is obtained, the image of the lesion to be detected is input to a standard lesion detection model for lesion detection, and a standard detection result is obtained.
  • The embodiment of this application improves the quality of the training image by performing noise reduction on it, which in turn improves the accuracy of the target detection model trained with that image. By constructing the target loss function, three loss values are calculated separately for the predicted center point information, predicted size information, and predicted boundary information output by the target detection model, and these three loss values are used to update the parameters of the target detection model, which improves the accuracy of the center point, size, and boundary of the target output by the model. An image of the target to be detected is acquired and image detection is performed on it with the standard target detection model, without manual image analysis, which improves the detection efficiency of the target in the image. Therefore, the method for detecting a target in an image proposed in this application can improve the efficiency and accuracy of detecting the target in the image.
  • FIG. 4 is a schematic diagram of the modules of the device for detecting a target in an image of the present application.
  • the device 100 for detecting a target in an image described in this application can be installed in an electronic device.
  • the detection device of the target in the image may include the image noise reduction module 101, the model construction module 102, the target detection module 103, the loss function construction module 104, the model optimization module 105, and the standard detection module 106.
  • the module described in the present invention can also be called a unit, which refers to a series of computer program segments that can be executed by the processor of an electronic device and can complete fixed functions, and are stored in the memory of the electronic device.
  • each module/unit is as follows:
  • the image noise reduction module 101 is used to obtain training images, perform noise reduction processing on the training images, and obtain standard training images, where the training images include standard center point information, standard size information, and standard boundary information of the target object;
  • the model building module 102 is used to build a target detection model
  • the target detection module 103 is configured to use the target detection model to perform target detection on the standard training image to obtain a detection result, where the detection result includes predicted center point information, predicted size information, and predicted boundary information of the target;
  • the loss function construction module 104 is configured to construct a target loss function according to the detection result and the standard training image
  • the model optimization module 105 is configured to calculate the loss value of the target loss function, and optimize the target detection model according to the loss value to obtain a standard target detection model;
  • the standard detection module 106 is configured to obtain an image of a target object to be detected, and use the standard target object detection model to perform image detection on the image of the target object to be detected to obtain a standard detection result.
  • each module of the device for detecting the target object in the image is as follows:
  • the image noise reduction module 101 is used to obtain training images, perform noise reduction processing on the training images, and obtain standard training images, where the training images include standard center point information, standard size information, and standard boundary information of the target object.
  • a Java statement with a data capture function is used to obtain training images from a blockchain node that stores training images, and the high data throughput of the blockchain node can improve the efficiency of obtaining training images.
  • the training image is an image including a target object, and the training image also includes standard center point information, standard size information, and standard boundary information of the target object.
  • the training image is a histopathological image containing a target lesion
  • the histopathological image includes standard center point information of the target lesion, standard size information of the target lesion, and standard boundary information of the target lesion.
  • the image noise reduction module 101 is specifically used for:
  • the first value is 1, and the second value is 0.
  • the calculating the average value of pixels in the preset neighborhood of the target pixel point includes:
  • the following average value calculation formula is used to calculate the average value of pixels in the preset neighborhood:
  • f(j,k) is the value of the pixel (j,k) in the preset neighborhood
  • g(x,y) is the average value of the pixels in the preset neighborhood of the pixel (x,y)
  • W is the preset neighborhood
  • (j,k) indexes the pixels in the preset neighborhood
  • med is the mean (averaging) operation
  • N is the number of pixels in the preset neighborhood.
  • the embodiment of the application performs noise reduction processing on the training image to obtain a standard training image, which can reduce noise in the training image, highlight the target in the training image, and improve the accuracy of the model trained using the standard training image.
  • the model construction module 102 is used to construct a target detection model.
  • the target detection model includes a plurality of parallel convolution channels with different resolutions.
  • the target detection model adopts the HRnet network structure, and the HRnet network uses multi-channel, multi-resolution branch parallel convolution to convolve the same feature, so as to obtain the same feature of the target with different resolutions.
  • the HRnet network used in the embodiments of the present application changes from a traditional serial connection convolution to a parallel connection convolution, thereby obtaining rich high-resolution representations and improving the accuracy of target detection by the model.
  • the target detection module 103 is configured to use the target detection model to perform target detection on the standard training image to obtain a detection result, where the detection result includes predicted center point information, predicted size information, and predicted boundary information of the target.
  • the target detection module 103 is specifically configured to:
  • Image segmentation is performed on the fusion feature map by using an image segmentation algorithm to obtain the detection result.
  • the image segmentation algorithm includes, but is not limited to, a region-based image segmentation algorithm, a threshold-based image segmentation algorithm, and an edge-based image segmentation algorithm.
  • the forward parallel convolution channel and the backward parallel convolution channel are relative terms.
  • For example, the target detection model includes four parallel convolution channels; of two adjacent parallel convolution channels that perform convolution on the standard training image, the earlier channel is referred to as the forward parallel convolution channel, and the later channel, relative to that earlier one, is referred to as the backward parallel convolution channel.
  • When the forward parallel convolution channel is the initial parallel convolution channel, the backward convolution performs convolution on the result obtained in the forward direction together with the input of the forward parallel convolution channel to obtain a feature map; otherwise, the backward convolution performs convolution on the result obtained in the forward direction together with the inputs of all forward parallel convolution channels to obtain a feature map.
  • the standard training image is convolved in the first parallel convolution channel to obtain the first feature map
  • the first parallel convolution channel, the second parallel convolution channel, the third parallel convolution channel, and the fourth parallel convolution channel are connected in parallel to obtain four feature maps of the same feature with different resolutions.
  • As the standard training image passes through successive parallel convolution channels of the target detection model, the resolution of the resulting feature map gradually decreases while its feature information is gradually enhanced. Therefore, the feature maps obtained through the multi-layer parallel convolution channels in the embodiment of the present application contain both high-resolution position information and low-resolution feature information, which is more conducive to the subsequent use of the feature maps for target detection and improves the accuracy of the target detection model.
  • the loss function construction module 104 is configured to construct a target loss function according to the detection result and the standard training image.
  • the target loss function includes: a center point loss function, a size loss function, and a boundary loss function.
  • the target loss function is:
  • L_det is the center point loss function
  • L_size is the size loss function
  • L_bce is the boundary loss function
  • C is the number of target categories
  • H is the length of the standard training image
  • W is the width of the standard training image
  • N is the number of the standard training images
  • α and β are preset constants
  • p_cij is the predicted center point information
  • y_cij is the standard center point information
  • s_k is the predicted size information
  • p_ij is the predicted boundary information
  • y_ij is the standard boundary information.
  • The embodiment of this application uses a combination of a center point loss function, a size loss function, and a boundary loss function as the target loss function, so that three loss values, for the center point position, size, and boundary position of the target, are obtained at the same time and used to update the parameters of the target detection model, which is beneficial to improving the accuracy of the target detection model.
  • the model optimization module 105 is configured to calculate the loss value of the target loss function, and optimize the target detection model according to the loss value to obtain a standard target detection model.
  • the model optimization module 105 is specifically used for:
  • an optimization algorithm is used to optimize the parameters of the target detection model
  • When the loss value of the target loss function is greater than the preset loss threshold, the Adam optimization algorithm is used to optimize the parameters of the target detection model.
  • The Adam optimization algorithm can adaptively adjust the learning rate during the training process of the target detection model, which makes the target detection model more accurate and improves the performance of the target detection model.
  • the standard detection module 106 is configured to obtain an image of a target object to be detected, and use the standard target object detection model to perform image detection on the image of the target object to be detected to obtain a standard detection result.
  • the image of the target object to be detected includes a medical image of a biological tissue.
  • the image of the lesion to be detected can be uploaded by the user through a user-side program.
  • the image of the lesion to be detected is obtained, the image of the lesion to be detected is input to the standard lesion detection model for lesion detection, and the standard detection result is obtained.
  • The embodiment of this application improves the quality of the training image by performing noise reduction on it, which in turn improves the accuracy of the target detection model trained with that image. By constructing the target loss function, three loss values are calculated separately for the predicted center point information, predicted size information, and predicted boundary information output by the target detection model, and these three loss values are used to update the parameters of the target detection model, which improves the accuracy of the center point, size, and boundary of the target output by the model. An image of the target to be detected is acquired and image detection is performed on it with the standard target detection model, without manual image analysis, which improves the detection efficiency of the target in the image. Therefore, the device for detecting a target in an image proposed in this application can improve the efficiency and accuracy of detecting the target in the image.
  • FIG. 5 is a schematic structural diagram of an electronic device that implements the method for detecting a target object in an image according to the present application.
  • the electronic device 1 may include a processor 10, a memory 11, and a bus, and may also include a computer program stored in the memory 11 and running on the processor 10, such as a target detection program 12 in an image.
  • the memory 11 includes at least one type of readable storage medium, and the memory 11 may be volatile or non-volatile.
  • the readable storage medium includes flash memory, mobile hard disk, multimedia card, card-type memory (for example: SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, etc.
  • the memory 11 may be an internal storage unit of the electronic device 1 in some embodiments, for example, a mobile hard disk of the electronic device 1. In other embodiments, the memory 11 may also be an external storage device of the electronic device 1, such as a plug-in mobile hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card (Flash Card) equipped on the electronic device 1.
  • the memory 11 may also include both an internal storage unit of the electronic device 1 and an external storage device.
  • the memory 11 can be used not only to store application software and various data installed in the electronic device 1, such as the code of the target detection program 12 in the image, etc., but also to temporarily store data that has been output or will be output.
  • the processor 10 may be composed of integrated circuits in some embodiments, for example, it may be composed of a single packaged integrated circuit, or of multiple integrated circuits with the same function or different functions, including combinations of one or more central processing units (CPU), microprocessors, digital processing chips, graphics processors, various control chips, and the like.
  • the processor 10 is the control unit of the electronic device; it uses various interfaces and lines to connect the components of the entire electronic device, runs or executes programs or modules stored in the memory 11 (such as the detection program of the target in the image), and calls data stored in the memory 11 to execute various functions of the electronic device 1 and process data.
  • the bus may be a peripheral component interconnect standard (PCI) bus or an extended industry standard architecture (EISA) bus, etc.
  • the bus can be divided into address bus, data bus, control bus and so on.
  • the bus is configured to implement connection and communication between the memory 11 and at least one processor 10 and the like.
  • FIG. 5 only shows an electronic device with some components. Those skilled in the art can understand that the structure shown in FIG. 5 does not constitute a limitation on the electronic device 1, which may include fewer or more components than shown in the figure, a combination of certain components, or a different arrangement of components.
  • the electronic device 1 may also include a power source (such as a battery) for supplying power to various components.
  • the power source may be logically connected to the at least one processor 10 through a power management device, so that functions such as charge management, discharge management, and power consumption management are implemented through the power management device.
  • the power supply may also include any components such as one or more DC or AC power supplies, recharging devices, power failure detection circuits, power converters or inverters, and power status indicators.
  • the electronic device 1 may also include various sensors, Bluetooth modules, Wi-Fi modules, etc., which will not be repeated here.
  • the electronic device 1 may also include a network interface.
  • the network interface may include a wired interface and/or a wireless interface (such as a Wi-Fi interface, a Bluetooth interface, etc.), which is usually used to establish a communication connection between the electronic device 1 and other electronic devices.
  • the electronic device 1 may also include a user interface.
  • the user interface may be a display (Display) and an input unit (such as a keyboard (Keyboard)).
  • the user interface may also be a standard wired interface or a wireless interface.
  • the display may be an LED display, a liquid crystal display, a touch liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, etc.
  • the display can also be appropriately called a display screen or a display unit, which is used to display the information processed in the electronic device 1 and to display a visualized user interface.
  • The detection program 12 of the target object in the image stored in the memory 11 of the electronic device 1 is a combination of multiple computer programs. When run by the processor 10, it can implement the following:
  • acquiring a training image, and performing noise reduction processing on the training image to obtain a standard training image, wherein the training image includes standard center point information, standard size information, and standard boundary information of the target;
  • constructing a target detection model;
  • performing target detection on the standard training image by using the target detection model to obtain a detection result, wherein the detection result includes predicted center point information, predicted size information, and predicted boundary information of the target;
  • constructing a target loss function according to the detection result and the standard training image;
  • calculating a loss value of the target loss function, and optimizing the target detection model according to the loss value to obtain a standard target detection model;
  • acquiring an image of the target to be detected, and using the standard target detection model to perform image detection on the image of the target to be detected to obtain a standard detection result.
  • the integrated module/unit of the electronic device 1 can be stored in a computer-readable storage medium, which can be volatile or non-volatile.
  • the computer-readable medium may include any entity or device capable of carrying the computer program code, such as a recording medium, a USB flash drive (U disk), a mobile hard disk, a magnetic disk, an optical disk, a computer memory, or a read-only memory (ROM).
  • the computer usable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function, etc., and the storage data area may store data created according to the use of the blockchain node, etc.
  • modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the modules can be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional modules in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware, or may be implemented in the form of hardware plus software functional modules.
  • the blockchain referred to in this application is a new application mode of computer technology such as distributed data storage, point-to-point transmission, consensus mechanism, and encryption algorithm.
  • Blockchain is essentially a decentralized database; it is a series of data blocks associated with each other using cryptographic methods, and each data block contains a batch of network transaction information, which is used to verify the validity of the information (anti-counterfeiting) and to generate the next block.
  • the blockchain can include the underlying platform of the blockchain, the platform product service layer, and the application service layer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method and apparatus for detecting a target object in an image, as well as an electronic device and a storage medium, relating to image analysis technology. The method comprises: performing noise reduction processing on a training image so as to obtain a standard training image, and performing target object detection on the standard training image by means of a target object detection model so as to obtain predicted center point information, predicted size information and predicted boundary information; constructing a target loss function, calculating a loss value, and optimizing the target object detection model according to the loss value so as to obtain a standard target object detection model; and acquiring an image of a target object to be detected, and performing image detection on said image by means of the standard target object detection model so as to obtain a standard detection result. In addition, the method also relates to blockchain technology, and the standard detection result can be stored in a blockchain node. The method can be applied to the detection of lesion information in a medical image. By means of the method, the efficiency and accuracy of detecting a target object in an image can be improved.
PCT/CN2020/131992 2020-09-25 2020-11-27 Procédé et appareil permettant de détecter un objet cible dans une image, dispositif électronique et support de stockage WO2021189912A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011023942.3 2020-09-25
CN202011023942.3A CN111932482B (zh) 2020-09-25 2020-09-25 图像中目标物的检测方法、装置、电子设备及存储介质

Publications (1)

Publication Number Publication Date
WO2021189912A1 true WO2021189912A1 (fr) 2021-09-30

Family

ID=73334774

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/131992 WO2021189912A1 (fr) 2020-09-25 2020-11-27 Procédé et appareil permettant de détecter un objet cible dans une image, dispositif électronique et support de stockage

Country Status (2)

Country Link
CN (1) CN111932482B (fr)
WO (1) WO2021189912A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113610069A (zh) * 2021-10-11 2021-11-05 北京文安智能技术股份有限公司 基于知识蒸馏的目标检测模型训练方法
CN114241411A (zh) * 2021-12-15 2022-03-25 平安科技(深圳)有限公司 基于目标检测的计数模型处理方法、装置及计算机设备
CN114758249A (zh) * 2022-06-14 2022-07-15 深圳市优威视讯科技股份有限公司 基于野外夜间环境的目标物监测方法、装置、设备及介质
CN114972303A (zh) * 2022-06-16 2022-08-30 平安科技(深圳)有限公司 图像获取方法、装置、电子设备及存储介质
CN115690853A (zh) * 2022-12-30 2023-02-03 广州蚁窝智能科技有限公司 手势识别方法及电动卫生罩启闭控制系统

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111932482B (zh) * 2020-09-25 2021-05-18 平安科技(深圳)有限公司 图像中目标物的检测方法、装置、电子设备及存储介质
CN112581522B (zh) * 2020-11-30 2024-05-07 平安科技(深圳)有限公司 图像中目标物位置检测方法、装置、电子设备及存储介质
CN112465060A (zh) * 2020-12-10 2021-03-09 平安科技(深圳)有限公司 图像中目标物检测方法、装置、电子设备及可读存储介质
CN113160144B (zh) * 2021-03-25 2023-05-26 平安科技(深圳)有限公司 目标物检测方法、装置、电子设备及存储介质
CN113222890B (zh) * 2021-03-30 2023-09-15 平安科技(深圳)有限公司 小目标物检测方法、装置、电子设备及存储介质
CN113159147B (zh) * 2021-04-08 2023-09-26 平安科技(深圳)有限公司 基于神经网络的图像识别方法、装置、电子设备
CN113537070B (zh) * 2021-07-19 2022-11-22 中国第一汽车股份有限公司 一种检测方法、装置、电子设备及存储介质
CN113780291A (zh) * 2021-08-25 2021-12-10 北京达佳互联信息技术有限公司 一种图像处理方法、装置、电子设备及存储介质
CN115984269B (zh) * 2023-03-20 2023-07-14 湖南长理尚洋科技有限公司 一种非侵入式局部水生态安全检测方法与系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107665736A (zh) * 2017-09-30 2018-02-06 百度在线网络技术(北京)有限公司 用于生成信息的方法和装置
US20200005460A1 (en) * 2018-06-28 2020-01-02 Shenzhen Imsight Medical Technology Co. Ltd. Method and device for detecting pulmonary nodule in computed tomography image, and computer-readable storage medium
CN110880177A (zh) * 2019-11-26 2020-03-13 北京推想科技有限公司 一种图像识别方法及装置
CN111402226A (zh) * 2020-03-13 2020-07-10 浙江工业大学 一种基于级联卷积神经网络的表面疵点检测方法
CN111932482A (zh) * 2020-09-25 2020-11-13 平安科技(深圳)有限公司 图像中目标物的检测方法、装置、电子设备及存储介质

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10304193B1 (en) * 2018-08-17 2019-05-28 12 Sigma Technologies Image segmentation and object detection using fully convolutional neural network
CN110599451B (zh) * 2019-08-05 2023-01-20 平安科技(深圳)有限公司 医学图像病灶检测定位方法、装置、设备及存储介质
CN110705555B (zh) * 2019-09-17 2022-06-14 中山大学 基于fcn的腹部多器官核磁共振图像分割方法、系统及介质
CN110674866B (zh) * 2019-09-23 2021-05-07 兰州理工大学 迁移学习特征金字塔网络对X-ray乳腺病灶图像检测方法
CN110942446A (zh) * 2019-10-17 2020-03-31 付冲 一种基于ct影像的肺结节自动检测方法
CN111597933B (zh) * 2020-04-30 2023-07-14 合肥的卢深视科技有限公司 人脸识别方法和装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107665736A (zh) * 2017-09-30 2018-02-06 百度在线网络技术(北京)有限公司 用于生成信息的方法和装置
US20200005460A1 (en) * 2018-06-28 2020-01-02 Shenzhen Imsight Medical Technology Co. Ltd. Method and device for detecting pulmonary nodule in computed tomography image, and computer-readable storage medium
CN110880177A (zh) * 2019-11-26 2020-03-13 北京推想科技有限公司 一种图像识别方法及装置
CN111402226A (zh) * 2020-03-13 2020-07-10 浙江工业大学 一种基于级联卷积神经网络的表面疵点检测方法
CN111932482A (zh) * 2020-09-25 2020-11-13 平安科技(深圳)有限公司 图像中目标物的检测方法、装置、电子设备及存储介质

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113610069A (zh) * 2021-10-11 2021-11-05 北京文安智能技术股份有限公司 基于知识蒸馏的目标检测模型训练方法
CN113610069B (zh) * 2021-10-11 2022-02-08 北京文安智能技术股份有限公司 基于知识蒸馏的目标检测模型训练方法
CN114241411A (zh) * 2021-12-15 2022-03-25 平安科技(深圳)有限公司 基于目标检测的计数模型处理方法、装置及计算机设备
CN114241411B (zh) * 2021-12-15 2024-04-09 平安科技(深圳)有限公司 基于目标检测的计数模型处理方法、装置及计算机设备
CN114758249A (zh) * 2022-06-14 2022-07-15 深圳市优威视讯科技股份有限公司 基于野外夜间环境的目标物监测方法、装置、设备及介质
CN114758249B (zh) * 2022-06-14 2022-09-02 深圳市优威视讯科技股份有限公司 基于野外夜间环境的目标物监测方法、装置、设备及介质
CN114972303A (zh) * 2022-06-16 2022-08-30 平安科技(深圳)有限公司 图像获取方法、装置、电子设备及存储介质
CN115690853A (zh) * 2022-12-30 2023-02-03 广州蚁窝智能科技有限公司 手势识别方法及电动卫生罩启闭控制系统
CN115690853B (zh) * 2022-12-30 2023-04-28 广州蚁窝智能科技有限公司 手势识别方法及电动卫生罩启闭控制系统

Also Published As

Publication number Publication date
CN111932482A (zh) 2020-11-13
CN111932482B (zh) 2021-05-18

Similar Documents

Publication Publication Date Title
WO2021189912A1 (fr) Procédé et appareil permettant de détecter un objet cible dans une image, dispositif électronique et support de stockage
WO2022121156A1 (fr) Procédé et appareil permettant de détecter un objet cible dans une image, dispositif électronique et support de stockage lisible
WO2021217851A1 (fr) Méthode et appareil de marquage automatique de cellules anormales, dispositif électronique et support d'enregistrement
US10810735B2 (en) Method and apparatus for analyzing medical image
CN111047609B (zh) 肺炎病灶分割方法和装置
WO2021189909A1 (fr) Procédé et appareil de détection et d'analyse de lésion, dispositif électronique et support de stockage informatique
WO2021151338A1 (fr) Procédé d'analyse d'images médicales, appareil, dispositif électronique et support de stockage lisible
CN110059697B (zh) 一种基于深度学习的肺结节自动分割方法
WO2021189901A1 (fr) Procédé et appareil de segmentation d'image, dispositif électronique et support d'informations lisible par ordinateur
WO2022110712A1 (fr) Procédé et appareil d'amélioration d'image, dispositif électronique et support de stockage lisible par ordinateur
WO2020253508A1 (fr) Procédé et appareil de détection de cellule anormale, et support d'informations lisible par ordinateur
WO2021189913A1 (fr) Procédé et appareil de segmentation d'objet cible dans une image, et dispositif électronique et support d'enregistrement
WO2021189855A1 (fr) Procédé et appareil de reconnaissance d'image basés sur une séquence de tdm et dispositif électronique et support
WO2021189910A1 (fr) Procédé et appareil de reconnaissance d'image, dispositif électronique et support d'informations lisible par ordinateur
WO2021189911A1 (fr) Procédé et appareil de détection de position d'objet cible basée sur un flux vidéo, dispositif et support
CN111862096B (zh) 图像分割方法、装置、电子设备及存储介质
WO2021189848A1 (fr) Procédé et appareil d'entrainement de modèles, procédé et appareil de détermination du rapport coupelle-disque et dispositif et support de stockage
WO2021189827A1 (fr) Procédé et appareil de reconnaissance d'image floue et dispositif et support d'informations lisible par ordinateur
WO2020248848A1 (fr) Procédé et dispositif de détermination intelligente de cellule anormale, et support d'informations lisible par ordinateur
WO2022126903A1 (fr) Procédé et dispositif de détection de zone d'anomalie d'image, dispositif électronique et support de stockage
WO2021151307A1 (fr) Procédé intégré, appareil, dispositif et support intégrés basés sur le balayage et l'analyse de sections pathologiques
CN113065609B (zh) 图像分类方法、装置、电子设备及可读存储介质
WO2021189856A1 (fr) Procédé et appareil de vérification de certificat, et dispositif électronique et support
WO2021184576A1 (fr) Procédé et appareil de génération d'image médicale, dispositif électronique, et support
WO2021189914A1 (fr) Dispositif électronique, procédé et appareil de génération d'index d'image médicale, et support de stockage

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20926583

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20926583

Country of ref document: EP

Kind code of ref document: A1