CN116245835A - Image detection method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN116245835A
CN116245835A (application CN202310149740.0A)
Authority
CN
China
Prior art keywords
target
image
pixel point
feature data
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310149740.0A
Other languages
Chinese (zh)
Other versions
CN116245835B (en)
Inventor
吴博烔
郭华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shukun Beijing Network Technology Co Ltd
Original Assignee
Shukun Beijing Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shukun Beijing Network Technology Co Ltd filed Critical Shukun Beijing Network Technology Co Ltd
Priority claimed from CN202310149740.0A
Publication of CN116245835A
Application granted
Publication of CN116245835B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

According to the image detection method, device, electronic equipment and storage medium, at least two target images are acquired and the target candidate regions in each target image are determined. Reference feature data corresponding to each target candidate region are then obtained according to the pixel values of that region in each target image, and, for each pixel point in each target image, feature data to be identified are obtained according to the pixel values of that pixel point in each target image. A detection area is determined according to the reference feature data and the feature data to be identified, and the detection area is detected to obtain a detection result. Because the detection area is determined from the reference feature data of each target candidate region together with the feature data to be identified of each pixel point in each target image, every pixel point in the image can be accurately examined, so that the detection area of an object of smaller size can be determined and image detection is achieved quickly and accurately.

Description

Image detection method, device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image detection method, an image detection device, an electronic device, and a storage medium.
Background
With the development of image processing technology, image detection is applied in many fields. For example, in fields such as face recognition and medical image recognition, a specific object in an image needs to be detected. When detecting specific objects in an image, their sizes vary: some specific objects in the image are large, while others are small.
Disclosure of Invention
Based on the above-mentioned research, the embodiments of the present invention provide an image detection method, apparatus, electronic device, and readable storage medium, which can detect each pixel point in an image, and accurately detect a detection object with a small size.
Embodiments of the present invention may be implemented as follows:
in a first aspect, an embodiment of the present invention provides an image detection method, including:
acquiring at least two target images, and determining a target candidate region in each target image; the positions and the number of the target candidate areas included in each target image are the same;
obtaining reference feature data corresponding to each target candidate region according to the pixel value of each target candidate region in each target image;
for each pixel point in each target image, obtaining feature data to be identified corresponding to the pixel point according to the pixel value of the pixel point in each target image;
determining a detection area according to the reference characteristic data and the characteristic data to be identified corresponding to the pixel points;
and detecting the detection area to obtain a detection result of the detection area.
In an optional embodiment, the step of obtaining the reference feature data corresponding to each target candidate region according to the pixel value of each target candidate region in each target image includes:
acquiring pixel values of the center point of each target candidate region in each target image;
and obtaining the reference characteristic data corresponding to each target candidate region according to the pixel value of the center point of each target candidate region in each target image.
In an optional embodiment, the step of determining the detection area according to each reference feature data and the feature data to be identified corresponding to each pixel point includes:
determining and obtaining a target pixel point according to the reference feature data and the feature data to be identified corresponding to the pixel points;
and constructing the detection area according to the target pixel point.
In an optional embodiment, the step of determining, according to each piece of reference feature data and the feature data to be identified corresponding to each pixel point, to obtain the target pixel point includes:
calculating, for each pixel point, the similarity between each piece of reference feature data and the feature data to be identified corresponding to that pixel point;
detecting whether the similarity of each piece of reference characteristic data and the characteristic data to be identified corresponding to the pixel point accords with a preset threshold value;
if at least one similarity between the reference feature data and the feature data to be identified corresponding to the pixel point accords with the preset threshold, setting the pixel point as a target pixel point.
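As a concrete illustration of this thresholding step, the sketch below treats each feature as the vector of a pixel's values across the registered target images. The patent does not fix a particular similarity measure or threshold, so the cosine similarity and the 0.95 cutoff used here are assumptions, as is the function name:

```python
import numpy as np

def is_target_pixel(pixel_feature, reference_features, threshold=0.95):
    """Return True if the pixel's per-image value vector is similar to at
    least one reference feature vector (cosine similarity, assumed measure)."""
    p = np.asarray(pixel_feature, dtype=float)
    for ref in reference_features:
        r = np.asarray(ref, dtype=float)
        denom = np.linalg.norm(p) * np.linalg.norm(r)
        # A pixel is a target pixel as soon as one similarity meets the threshold.
        if denom > 0 and np.dot(p, r) / denom >= threshold:
            return True
    return False
```

A pixel whose values rise and fall across the images in the same proportion as a candidate region's values is accepted even if its absolute brightness differs, which matches the idea of comparing value profiles rather than raw intensities.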
In an alternative embodiment, the step of constructing a detection area according to the target pixel point includes:
constructing a preset frame with a first preset size by taking the target pixel point as a center;
detecting whether the pixel points in the preset frame meet preset conditions or not;
if so, constructing a detection area by taking the target pixel point as a center point according to a second preset size.
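A minimal sketch of this construction. The patent leaves the "preset condition" unspecified, so the condition assumed below is that a minimum fraction of the pixels inside the first frame are themselves target pixels; the function name, sizes, and fraction are likewise assumptions:

```python
import numpy as np

def build_detection_region(target_mask, center, first_size=5, second_size=9,
                           min_fraction=0.5):
    """target_mask: boolean 2-D array marking target pixels; center: (row, col)
    of a target pixel. Returns (row_slice, col_slice) of the detection region
    built with the second preset size, or None if the condition fails."""
    r, c = center
    h = first_size // 2
    # Preset frame of the first preset size centred on the target pixel.
    frame = target_mask[max(r - h, 0):r + h + 1, max(c - h, 0):c + h + 1]
    # Assumed preset condition: enough of the frame consists of target pixels.
    if frame.mean() < min_fraction:
        return None
    h2 = second_size // 2
    rows = slice(max(r - h2, 0), r + h2 + 1)
    cols = slice(max(c - h2, 0), c + h2 + 1)
    return rows, cols
```

The two-stage check discards isolated false-positive pixels (their small frame contains few other target pixels) before a larger detection region is cut out for classification.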
In an alternative embodiment, the step of determining the target candidate region in each of the target images includes:
inputting each target image into a preset segmentation model, and determining and obtaining a target candidate region in each target image.
In an alternative embodiment, the step of detecting the detection area to obtain a detection result of the detection area includes:
and inputting the detection area into a preset classification model to obtain a detection result of the detection area.
In a second aspect, an embodiment of the present invention provides an image detection apparatus, including:
the acquisition module is used for acquiring at least two target images and determining target candidate areas in the target images; the positions and the number of the target candidate areas included in each target image are the same;
the feature construction module is used for obtaining reference feature data corresponding to each target candidate region according to the pixel value of each target candidate region in each target image, and is further used for obtaining, for each pixel point in each target image, the feature data to be identified corresponding to that pixel point according to its pixel value in each target image;
the computing module is used for determining a detection area according to the reference characteristic data and the characteristic data to be identified corresponding to the pixel points;
and the detection module is used for detecting the detection area to obtain a detection result of the detection area.
In a third aspect, an embodiment of the present invention provides an electronic device, where the electronic device includes a memory, a processor, and a computer program stored on the memory and capable of running on the processor, and the processor implements the image detection method according to any one of the foregoing embodiments when executing the computer program.
In a fourth aspect, an embodiment of the present invention provides a storage medium having stored thereon a computer program that, when executed by a processor, implements the image detection method according to any of the foregoing embodiments.
According to the image detection method, device, electronic equipment and storage medium, at least two target images are acquired and the target candidate regions in each target image are determined. Reference feature data corresponding to each target candidate region are obtained according to the pixel values of that region in each target image; for each pixel point in each target image, feature data to be identified are obtained according to the pixel values of that pixel point in each target image; a detection area is determined according to the reference feature data and the feature data to be identified; and the detection area is detected to obtain its detection result. In this way, because the detection area is determined from the reference feature data of each target candidate region together with the feature data to be identified of each pixel point in each target image, every pixel point in the image can be accurately examined, the detection area of an object of smaller size can be determined, and image detection is achieved quickly and accurately.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present invention, and that a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Fig. 2 is a schematic flow chart of an image detection method according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a reference characteristic curve X and a characteristic curve Y to be identified according to an embodiment of the present invention.
Fig. 4 is a block diagram of an image detection apparatus according to an embodiment of the present invention.
Icon: 100-an electronic device; 10-an image detection device; 11-an acquisition module; 12-a feature construction module; 13-a calculation module; 14-a detection module; 20-memory; 30-a processor; 40-communication unit.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to fall within the scope of the invention.
In the description of the present invention, it should be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", etc. indicate orientations or positional relationships based on the drawings are merely for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the apparatus or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more of the described features. In the description of the present invention, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In the present invention, the term "exemplary" is used to mean "serving as an example, instance, or illustration." Any embodiment described as "exemplary" in this disclosure is not necessarily to be construed as preferred or advantageous over other embodiments. The following description is presented to enable any person skilled in the art to make and use the invention. In the following description, details are set forth for purposes of explanation. It will be apparent to one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known structures and processes have not been described in detail so as not to obscure the description of the invention with unnecessary detail. Thus, the present invention is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
Face recognition, target detection, lesion localization, tumor detection and the like are all realized by means of image detection. When specific objects in an image are detected, their sizes differ: some specific objects in the image are very large, while others are very small. The conventional image detection method can only detect specific objects of relatively large size; specific objects of relatively small size are difficult to detect, so accurate detection of the specific objects in the image cannot be achieved.
To address the above problems, the image detection method, device, electronic equipment and storage medium provided by the embodiments of the present invention acquire at least two target images and determine the target candidate regions in each target image; obtain reference feature data corresponding to each target candidate region according to the pixel values of that region in each target image; obtain, for each pixel point in each target image, the feature data to be identified corresponding to that pixel point according to its pixel values in each target image; determine a detection area according to the reference feature data and the feature data to be identified; and detect the detection area to obtain its detection result. In this way, because the detection area is determined from the reference feature data of each target candidate region together with the feature data to be identified of each pixel point in each target image, every pixel point in the image can be accurately examined, the detection area of an object of smaller size can be determined, and image detection is achieved quickly and accurately.
Referring to fig. 1, fig. 1 is a block diagram of an electronic device 100 according to the present embodiment. As shown in fig. 1, the electronic device may include an image detection apparatus 10, a memory 20, a processor 30, and a communication unit 40, where the memory 20 stores machine-readable instructions executable by the processor 30, and when the electronic device 100 is operated, the processor 30 and the memory 20 communicate with each other through a bus, and the processor 30 executes the machine-readable instructions and performs an image detection method.
The memory 20, the processor 30 and the communication unit 40 are electrically connected to each other directly or indirectly to achieve signal transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The image detection device 10 comprises at least one software functional module which may be stored in the memory 20 in the form of software or firmware (firmware). The processor 30 is configured to execute executable modules (e.g., software functional modules or computer programs included in the image detection device 10) stored in the memory 20.
The memory 20 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), etc.
In some embodiments, the processor 30 is configured to perform one or more of the functions described in this embodiment. In some embodiments, the processor 30 may include one or more processing cores (e.g., a single-core or multi-core processor). By way of example only, the processor 30 may include a Central Processing Unit (CPU), an Application-Specific Integrated Circuit (ASIC), an Application-Specific Instruction-set Processor (ASIP), a Graphics Processing Unit (GPU), a Physics Processing Unit (PPU), a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a microcontroller unit, a Reduced Instruction Set Computer (RISC), a microprocessor, or the like, or any combination thereof.
For ease of illustration, only one processor is depicted in the electronic device 100. It should be noted, however, that the electronic device 100 in the present embodiment may also include a plurality of processors, so a step described in this embodiment as performed by one processor may also be performed jointly or separately by a plurality of processors. For example, if the processor of the server performs step A and step B, step A and step B may also be performed by two different processors together or separately: the first processor performs step A and the second processor performs step B, or the first and second processors perform steps A and B together.
In this embodiment, the memory 20 is used for storing a program, and the processor 30 is used for executing the program after receiving an execution instruction. The method defined by the flow disclosed in any embodiment of the present invention may be applied to the processor 30, or implemented by the processor 30.
The communication unit 40 is used for establishing a communication connection between the electronic device 100 and other devices through a network, and for transceiving data through the network.
In some embodiments, the network may be any type of wired or wireless network, or a combination thereof. By way of example only, the network may include a wired network, a wireless network, a fiber-optic network, a telecommunications network, an intranet, the Internet, a Local Area Network (LAN), a Wide Area Network (WAN), a Wireless Local Area Network (WLAN), a Metropolitan Area Network (MAN), a Public Switched Telephone Network (PSTN), a Bluetooth network, a ZigBee network, a Near Field Communication (NFC) network, or the like, or any combination thereof.
In this embodiment, the electronic device 100 may be, but is not limited to, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (Personal Digital Assistant, PDA), or the like.
It will be appreciated that the structure shown in fig. 1 is merely illustrative. The electronic device 100 may also have more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
Based on the implementation architecture of fig. 1, the present embodiment provides an image detection method, which is executed by the electronic device of fig. 1, and the image detection method provided by the present embodiment is explained in detail below based on the structural diagram of the electronic device 100 provided in fig. 1. Referring to fig. 2 in combination, the image detection method provided in the present embodiment includes steps S101 to S105.
S101: acquiring at least two target images, and determining and obtaining target candidate areas in each target image; wherein, the positions and the number of the target candidate areas included in each target image are the same.
In this embodiment, the object detected in the target image may be any specific object: for example, a two-dimensional object such as a face or a pattern in an image, or a three-dimensional object such as a lesion in a medical image. This is not specifically limited, and the number and type of detected objects may be adjusted by those skilled in the art as needed.
In this embodiment, the at least two target images are registered, that is, they are captured under different shooting conditions at the same position and in the same coordinate system. Therefore, the image content of the at least two target images is the same, and the positions and the number of the target candidate regions determined in each target image are the same.
The image detection method provided by this embodiment can be applied to face detection. When the distance between a face and the camera is large, the face occupies only a small area of the image, and such a small face region is difficult to detect. When the object to be detected is a face in the image, the target images are obtained under different shooting conditions, for example under different shooting parameters and illuminance; this embodiment is not particularly limited in this respect. When the method is applied to face detection, the target candidate region in the target image is a large-size face region in the target image.
The image detection method provided by this embodiment can also be applied to the detection of abnormal regions in medical images, which existing detection methods have difficulty detecting when the abnormal region is small.
When the object to be detected is an abnormal region in a medical image, the target candidate region is an abnormal region of larger size. The at least two target images are at least two groups of medical images, which may be obtained by scanning a physiological structure of a human body by means of computed tomography (Computed Tomography, CT), magnetic resonance (Magnetic Resonance, MR) examination, 4D ultrasound, or the like. The physiological structure may be the heart, liver, lung, a blood vessel, a bone, etc. When the object to be detected is an abnormal region in the medical image, the target images are obtained under different shooting conditions; each group of medical images may be obtained under different scanning conditions. For example, when the heart is scanned, at least two groups of medical images of the heart are acquired under different scanning conditions, such as different contrast media and scanning modes. In practical applications, the acquisition modality of each of the at least two groups of medical images may also differ, provided the groups are registered. For example, of two groups of medical images, one may be obtained by computed tomography and the other by magnetic resonance examination, as long as the two groups are in the same coordinate system.
In this embodiment, the target candidate region represents the region, in the at least two target images, where an object to be detected of larger size is located. The target candidate regions of the at least two target images can be detected with an existing detection method such as a deep learning algorithm. However, it is difficult for such a method to detect an object of small size. Therefore, in this embodiment, after the target candidate regions are detected by the existing method, further detection is required.
S102: and obtaining the reference characteristic data corresponding to each target candidate region according to the pixel value of each target candidate region in each target image.
Because the acquisition conditions of the target images differ, even the same pixel point has different pixel values in different target images; and different target candidate regions correspond to different objects to be detected, so their pixel values differ even within the same target image. For example, when the object to be detected is an abnormal region in a medical image, the objects to be detected include regions of various lesions such as canceration and hemangioma. Therefore, the pixel values of the target candidate regions within the same target image also differ, and it is necessary to obtain the reference feature data corresponding to each target candidate region according to the pixel values of each target candidate region in each target image.
In this embodiment, when the reference feature data corresponding to each target candidate region are obtained according to the pixel values of that region in each target image, the reference feature data may be the average value, mode, or median of the pixel values of the region in each target image, or the pixel value of the center point of the region in each target image; this embodiment is not limited in this respect, and those skilled in the art may adjust it.
In this embodiment, since the number of target images is at least two, each piece of reference feature data is an array of at least two elements, each element representing the pixel value of the corresponding target candidate region in one target image. When the number of target images is large, the discrete points in each piece of reference feature data may be connected into a reference feature curve. The format of the reference feature data is not limited in this embodiment, as long as it can represent the pixel values of the target candidate region; those skilled in the art can adjust it according to their needs.
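As an illustration of this representation, the sketch below stacks the registered target images and builds one such array per candidate region, using the mean of the region's pixel values in each image. The mean is only one of the options named above, and the function name and mask-based region encoding are assumptions:

```python
import numpy as np

def reference_features(images, region_masks):
    """images: list of registered 2-D arrays (one per target image);
    region_masks: list of boolean masks, one per target candidate region.
    Returns one 1-D array per region whose i-th element is the region's
    mean pixel value in the i-th target image."""
    stack = np.stack(images).astype(float)      # shape: (n_images, H, W)
    # Boolean indexing keeps, for each image, only the region's pixels.
    return [stack[:, mask].mean(axis=1) for mask in region_masks]
```

With many target images, each returned array is exactly the sequence of discrete points that the text describes connecting into a reference feature curve.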
S103: and aiming at each pixel point in each target image, obtaining the feature data to be identified corresponding to the pixel point according to the pixel value of the pixel point in each target image.
In this embodiment, the coordinate systems of the target images are identical. Therefore, similarly, the feature data to be identified obtained from the pixel values of each pixel point in each target image can reflect how the pixel value of that pixel point changes across the target images.
Similarly, in this embodiment, since the number of target images is at least two, the feature data to be identified are an array of at least two elements, each element representing the pixel value of the corresponding pixel point in one target image. When the number of target images is large, the discrete points included in each piece of feature data to be identified may be connected into a feature curve to be identified. Likewise, the format of the feature data to be identified is not limited in this embodiment, as long as it can represent the pixel values of each pixel point in each target image; those skilled in the art can adjust it according to their needs.
S104: and determining and obtaining a detection area according to the reference characteristic data and the characteristic data to be identified corresponding to each pixel point.
The reference feature data are the feature data of the target candidate regions where objects to be detected are located, and the feature data to be identified are the feature data of each pixel point in each target image. According to the reference feature data and the feature data to be identified corresponding to each pixel point, the pixel points whose values are similar to those of the target candidate regions across the target images can be found, and the detection area is thereby obtained. In this way every pixel point in the target image can be accurately examined to determine the detection area, which improves detection efficiency.
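Step S104 can be sketched in vectorised form as follows, assuming cosine similarity as the (unspecified) similarity measure: every pixel's per-image value vector is compared against each reference feature array, and the pixels meeting the threshold form a mask of the detection area. The function name and threshold are assumptions:

```python
import numpy as np

def detection_area_mask(images, reference_features, threshold=0.95):
    """images: list of registered 2-D arrays; reference_features: list of 1-D
    arrays of per-image values (one per target candidate region).
    Returns a boolean (H, W) mask marking the target pixel points."""
    stack = np.stack(images).astype(float)              # (n_images, H, W)
    pixel_norm = np.linalg.norm(stack, axis=0)          # (H, W)
    mask = np.zeros(stack.shape[1:], dtype=bool)
    for ref in reference_features:
        ref = np.asarray(ref, dtype=float)
        # Dot product of every pixel's value vector with the reference vector.
        dots = np.tensordot(ref, stack, axes=([0], [0]))  # (H, W)
        denom = np.linalg.norm(ref) * pixel_norm
        sim = np.divide(dots, denom, out=np.zeros_like(dots), where=denom > 0)
        mask |= sim >= threshold
    return mask
```

Comparing whole value vectors at once keeps the per-pixel examination described above tractable even for large images, since no explicit loop over pixels is needed.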
S105: and detecting the detection area to obtain a detection result of the detection area.
In this embodiment, after the detection area is determined, the detection range is narrowed: it is only necessary to detect whether the detection object exists within the detection area. When the detection area is detected, a pre-trained deep learning model or a similar approach can be used, so that a detection result is obtained quickly. It can be appreciated that the deep learning algorithm in this embodiment is merely illustrative; in practical applications, those skilled in the art can replace it as needed, as long as a detection result can be obtained.
In this embodiment, the detection area is determined from the reference feature data of the target candidate regions and the feature data to be identified of each pixel point in each target image, so detection is accurate down to individual pixel points. The region of an object of smaller size in the target image can thus be detected, improving the accuracy and efficiency of image detection.
Optionally, in this embodiment, the step of obtaining the reference feature data corresponding to each target candidate region according to the pixel value of each target candidate region in each target image includes:
Acquiring pixel values of the center points of the target candidate areas in the target images;
and obtaining the reference characteristic data corresponding to each target candidate region according to the pixel value of the center point of each target candidate region in each target image.
When obtaining the reference feature data corresponding to each target candidate region according to the pixel values of each target candidate region in each target image, the pixel value of the center point of each target candidate region in each target image is selected as the reference feature data for simplicity of calculation. In this embodiment, the pixel value of the center point of each target candidate region in each target image is first acquired, and the reference feature data corresponding to each target candidate region are then obtained from these pixel values. The reference feature data may be an array in which each element is the pixel value of the center point of the target candidate region in one target image; when the number of target images is large, a reference feature curve may be constructed from the pixel values of the center point of each target candidate region across the target images. Any other form capable of characterizing the features of each target candidate region may also be used, and this embodiment is not specifically limited in this respect.
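The center-point extraction described above can be sketched as follows. This is an illustrative Python sketch, not part of the disclosed embodiment; the list-of-lists image representation and the function name are assumptions:

```python
def reference_features(images, candidate_centers):
    """For each target candidate region, collect the pixel value of its
    center point in every target image; the resulting list (one value
    per image) is that region's reference feature data, and acts as a
    "reference feature curve" when the number of images is large."""
    return [[img[r][c] for img in images] for (r, c) in candidate_centers]

# Two tiny 2x2 target images and one candidate region centered at (0, 1).
imgs = [[[10, 20], [30, 40]],
        [[11, 22], [33, 44]]]
ref = reference_features(imgs, [(0, 1)])
# ref[0] holds the center-point pixel value in each target image.
```

As in the embodiment, each element of `ref[0]` comes from one target image, so the array grows with the number of images rather than with image size.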
Optionally, in this embodiment, the step of determining the detection area according to each reference feature data and the feature data to be identified corresponding to each pixel point includes:
determining and obtaining target pixel points according to the reference feature data and the feature data to be identified corresponding to the pixel points;
and constructing a detection area according to the target pixel point.
In this embodiment, from the feature data to be identified corresponding to each pixel point and the reference feature data, the pixel points suspected of belonging to an object to be detected can be accurately identified among all pixel points. A detection area is then constructed from these target pixel points, which narrows the detection range and improves detection efficiency.
Optionally, in this embodiment, the step of determining, according to each reference feature data and feature data to be identified corresponding to each pixel point, to obtain the target pixel point includes:
calculating the similarity of each reference feature data and the feature data to be identified corresponding to each pixel point according to the feature data to be identified corresponding to each pixel point;
detecting whether the similarity of each piece of reference characteristic data and the characteristic data to be identified corresponding to the pixel point accords with a preset threshold value;
if the similarity between at least one piece of reference feature data and the feature data to be identified corresponding to the pixel point meets a preset threshold, setting the pixel point as a target pixel point.
There are a plurality of target candidate regions and, correspondingly, a plurality of pieces of reference feature data. For the feature data to be identified corresponding to each pixel point, its similarity to each piece of reference feature data is calculated, which amounts to comparing the pixel point with each target candidate region. If the similarity between at least one piece of reference feature data and the feature data to be identified corresponding to the pixel point meets the preset threshold, the pixel point is shown to be similar to one of the objects to be detected in the target candidate regions, and may therefore be a pixel point of an object to be detected.
In this embodiment, each piece of reference feature data corresponds to a different target candidate region, that is, to a different object to be detected, and different objects to be detected may present different pixel values in the target images; the preset threshold for each piece of reference feature data may therefore differ. The preset threshold may be set empirically, or a plurality of candidate thresholds may be preset for testing and the best-performing one on a test set selected. A person skilled in the art may set the preset threshold according to the category of the object to be detected, or determine it comprehensively through experiments; this embodiment is not specifically limited.
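The threshold-testing approach mentioned above can be sketched as follows. This is an illustrative assumption rather than the disclosed procedure: `evaluate` stands in for whatever test-set scoring is used, and the candidate values are hypothetical:

```python
def select_threshold(candidates, evaluate):
    """Pick, among several candidate preset thresholds, the one that
    performs best on a test set; `evaluate` stands in for running the
    detection with a given threshold and returning a quality score."""
    return max(candidates, key=evaluate)

# Hypothetical scoring in which detection quality peaks at a threshold of 5.
best = select_threshold([1, 3, 5, 10], lambda t: -abs(t - 5))
```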
In this embodiment, when calculating the similarity between each reference feature data and the feature data to be identified corresponding to each pixel point, there are various calculation methods, which are not limited in this embodiment, as long as the similarity between the reference feature data and the feature data to be identified can be measured.
For simplicity of calculation, in this embodiment, for each piece of feature data to be identified, the pixel value difference between it and each piece of reference feature data is calculated in each target image; when the pixel value difference in every target image meets the preset threshold corresponding to any one piece of reference feature data, the similarity between that feature data to be identified and that reference feature data is high, and the pixel point corresponding to the feature data to be identified is a target pixel point. For example, suppose the target images are A, B, C, D and E, and a target candidate region has been determined; referring to fig. 3, a reference characteristic curve X is constructed from the target candidate region, with a corresponding preset threshold x. Feature curves to be identified are constructed from the pixel values of each pixel point across the target images. To calculate the similarity between the feature curve Y to be identified, corresponding to pixel point y, and the reference characteristic curve X, the following differences are calculated: |a1-a2| between the pixel value a2 of pixel point y in image A and the value a1 of curve X in image A; |b1-b2| between the pixel value b2 of pixel point y in image B and the value b1 of curve X in image B; |c1-c2| between the pixel value c2 of pixel point y in image C and the value c1 of curve X in image C; |d1-d2| between the pixel value d2 of pixel point y in image D and the value d1 of curve X in image D; and |e1-e2| between the pixel value e2 of pixel point y in image E and the value e1 of curve X in image E. Only when |a1-a2|, |b1-b2|, |c1-c2|, |d1-d2| and |e1-e2| are all less than or equal to x is the similarity between the reference characteristic curve X and the feature curve Y to be identified high, and pixel point y is then a target pixel point.
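The per-image difference comparison in this example can be sketched as follows. The function name and the plain-list representation of the curves are illustrative assumptions:

```python
def is_target_pixel(pixel_curve, reference_curves, thresholds):
    """A pixel point is a target pixel point if, for at least one
    reference feature curve, the per-image absolute pixel value
    differences are ALL within that curve's preset threshold
    (|a1 - a2| <= x in every target image)."""
    for ref, x in zip(reference_curves, thresholds):
        if all(abs(p - r) <= x for p, r in zip(pixel_curve, ref)):
            return True
    return False

# Reference curve X over images A..E, with preset threshold x = 5.
X = [100, 110, 120, 130, 140]
y_near = [102, 108, 121, 133, 138]  # every difference <= 5 -> target pixel
y_far  = [102, 108, 121, 133, 150]  # |140 - 150| > 5     -> not a target
```

Because a single out-of-range image rejects the match, one large difference (as with `y_far`) disqualifies the pixel even when all other images agree closely.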
In this embodiment, if a detection area were constructed directly from every target pixel point, an already-determined target candidate region might be used again as a detection area for detection, or image noise might be divided into detection areas, increasing the detection workload. Screening is therefore required before the detection area is constructed.
Based on this, in this embodiment, the step of constructing the detection area according to the target pixel point includes:
constructing a preset frame with a first preset size by taking a target pixel point as a center;
detecting whether pixel points in a preset frame meet preset conditions or not;
if so, constructing a detection area by taking the target pixel point as a center point according to a second preset size.
The preset condition means that none of the pixel points in the preset frame belongs to a target candidate region, the pixel points in the preset frame that fall within the pixel value range of the object to be detected are contiguous, and this contiguous region is larger than or equal to the pixel area of the preset frame. In this embodiment, a preset frame of a first preset size is constructed to screen out pixel regions such as image noise; the first preset size may be set as required, and this embodiment is not specifically limited. For example, with a first preset size of 3×3: if, within the 3×3 pixel region centered on the target pixel point, the contiguous region that falls within the pixel value range of the object to be detected is smaller than the 3×3 pixel area, the target pixel point is judged to be a non-target such as image noise; if the contiguous region adjacent to the target pixel point that falls within the pixel value range of the object to be detected is larger than or equal to the 3×3 pixel area, and the 3×3 pixel region does not belong to any target candidate region, the target pixel point meets the preset condition.
In this embodiment, a detection area of a second preset size is constructed centered on each target pixel point that passes the preset-condition screening. Constructing the detection area around the target pixel point locates it quickly and improves detection efficiency. Likewise, the second preset size may be adjusted by a person skilled in the art as required; this embodiment is not limited.
In this embodiment, the first preset size and the second preset size may be two-dimensional or three-dimensional, and may be adjusted for different objects to be detected.
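The preset-frame screening and detection-area construction can be sketched as follows. This is an illustrative simplification, not the disclosed embodiment: the contiguous-region check is approximated by requiring every pixel in the frame to fall within the object's value range, and the names and the mask representation are assumptions:

```python
def passes_preset_frame(image, candidate_mask, lo, hi, center, frame=3):
    """Screen a target pixel point with a first-preset-size frame
    (default 3x3): no pixel in the frame may belong to a target
    candidate region, and every pixel in the frame must fall within
    the object's pixel value range [lo, hi] (a simplified stand-in
    for the contiguous-region test described above)."""
    r, c = center
    h = frame // 2
    if r - h < 0 or c - h < 0 or r + h >= len(image) or c + h >= len(image[0]):
        return False                      # frame falls off the image
    rows = range(r - h, r + h + 1)
    cols = range(c - h, c + h + 1)
    if any(candidate_mask[i][j] for i in rows for j in cols):
        return False                      # overlaps a target candidate region
    return all(lo <= image[i][j] <= hi for i in rows for j in cols)

def detection_area(center, size):
    """Detection area of a second preset size centered on a screened
    target pixel point, as (row_start, row_stop, col_start, col_stop)."""
    r, c = center
    h = size // 2
    return (r - h, r + h + 1, c - h, c + h + 1)
```

A 5×5 uniform image with an empty candidate mask passes the 3×3 frame at its center; flagging the center in the mask, or moving the frame off the image edge, fails the screening.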
Optionally, in this embodiment, the step of determining the target candidate area in each target image includes:
and inputting each target image into a preset segmentation model, and determining and obtaining a target candidate region in each target image.
In this embodiment, a segmentation model obtained by training is used to segment the objects to be detected in the at least two target images. Optionally, the segmentation model may be a convolutional neural network (Convolutional Neural Network, CNN), a recurrent neural network (Recurrent Neural Network, RNN), a bidirectional recurrent neural network (Bidirectional Recurrent Neural Network, BRNN), a gated recurrent unit (Gated Recurrent Unit, GRU), a long short-term memory (Long Short-Term Memory, LSTM) network, or the like; this embodiment is not limited, and a person skilled in the art may adjust the choice as required.
When the segmentation model is trained, a supervised training mode is adopted: an optimal model is obtained by training on sample data with known ground-truth labels, and the trained model is then used to map inputs to their corresponding outputs.
Optionally, in this embodiment, the step of detecting the detection area to obtain a detection result of the detection area includes:
and inputting the detection area into a preset classification model to obtain a detection result of the detection area.
In this embodiment, a deep learning algorithm is adopted: a classification model obtained through training detects the object to be detected within the input detection area. Likewise, there are many choices for the classification model, and it is obtained by supervised training in the same manner as the aforementioned segmentation model; the description is not repeated in this embodiment.
According to the image detection method provided by the embodiment of the invention, at least two target images are acquired, after target candidate areas in each target image are obtained, reference feature data corresponding to each target candidate area are obtained according to the pixel values of each target candidate area in each target image, feature data to be identified corresponding to each pixel point in each target image are obtained according to the pixel values of the pixel point in each target image, a detection area is determined according to each reference feature data and the feature data to be identified corresponding to each pixel point, and the detection result of the detection area is obtained by detecting the detection area. In this way, according to the reference feature data corresponding to each target candidate region and the feature data to be identified corresponding to each pixel point in each target image, the detection region is determined, and the detection region is detected, so that each pixel point in the image can be accurately detected, the detection region of an object with a smaller size is determined, and the image detection is rapidly and accurately realized.
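The overall flow summarized above can be sketched end to end as follows. This is an illustrative simplification of the described method, not the disclosed implementation: the pretrained classification model is stood in for by a callable, the screening of candidate-region pixels and image noise is omitted, and all names are assumptions:

```python
def detect_areas(images, candidate_centers, thresholds, classify):
    """End-to-end sketch: build reference feature data from the
    candidate regions' center points, build feature data to be
    identified for every pixel point across all target images, keep
    pixel points similar to at least one reference, and pass each
    kept point to `classify` (a stand-in for the classification
    model applied to the detection area around that point)."""
    refs = [[img[r][c] for img in images] for (r, c) in candidate_centers]
    results = []
    for i in range(len(images[0])):
        for j in range(len(images[0][0])):
            curve = [img[i][j] for img in images]
            if any(all(abs(p - q) <= x for p, q in zip(curve, ref))
                   for ref, x in zip(refs, thresholds)):
                results.append(((i, j), classify(curve)))
    return results

# Two 2x2 target images; the candidate region's center is pixel (0, 0).
imgs = [[[10, 50], [50, 50]], [[12, 50], [50, 50]]]
hits = detect_areas(imgs, [(0, 0)], [3], lambda curve: "object")
# Only pixel (0, 0) matches the reference curve within the threshold.
```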
Based on the same inventive concept, please refer to fig. 4 in combination, the present embodiment further provides an image detection apparatus 10, which applies the electronic device shown in fig. 1, as shown in fig. 4, and the image detection apparatus provided in this embodiment includes:
the acquisition module 11 is used for acquiring at least two target images and determining and obtaining target candidate areas in each target image; the positions and the number of the target candidate areas included in each target image are the same;
the feature construction module 12 is configured to obtain reference feature data corresponding to each target candidate region according to the pixel values of each target candidate region in each target image, and is further configured to obtain, for each pixel point in each target image, feature data to be identified corresponding to the pixel point according to the pixel values of the pixel point in each target image;
the calculating module 13 is configured to determine a detection area according to each reference feature data and feature data to be identified corresponding to each pixel point;
the detection module 14 is configured to detect the detection area, and obtain a detection result of the detection area.
In an alternative embodiment, the feature construction module 12 is configured to:
acquiring pixel values of the center points of the target candidate areas in the target images;
And obtaining the reference characteristic data corresponding to each target candidate region according to the pixel value of the center point of each target candidate region in each target image.
In an alternative embodiment, the calculation module 13 is configured to:
determining and obtaining target pixel points according to the reference feature data and the feature data to be identified corresponding to the pixel points;
and constructing a detection area according to the target pixel point.
In an alternative embodiment, the calculation module 13 is configured to:
calculating the similarity of each reference feature data and the feature data to be identified corresponding to each pixel point according to the feature data to be identified corresponding to each pixel point;
detecting whether the similarity of each piece of reference characteristic data and the characteristic data to be identified corresponding to the pixel point accords with a preset threshold value;
if the similarity between at least one piece of reference feature data and the feature data to be identified corresponding to the pixel point meets a preset threshold, setting the pixel point as a target pixel point.
In an alternative embodiment, the calculation module 13 is configured to:
constructing a preset frame with a first preset size by taking a target pixel point as a center;
detecting whether pixel points in a preset frame meet preset conditions or not;
if so, constructing a detection area by taking the target pixel point as a center point according to a second preset size.
In an alternative embodiment, the obtaining module 11 is configured to:
and inputting each target image into a preset segmentation model, and determining and obtaining a target candidate region in each target image.
In an alternative embodiment, the detection module 14 is configured to:
and inputting the detection area into a preset classification model to obtain a detection result of the detection area.
The image detection device provided by the embodiment of the invention acquires at least two target images, determines to obtain target candidate areas in each target image, obtains reference feature data corresponding to each target candidate area according to the pixel value of each target candidate area in each target image, obtains to-be-identified feature data corresponding to each pixel point in each target image according to the pixel value of the pixel point in each target image, determines to obtain a detection area according to each reference feature data and to-be-identified feature data corresponding to each pixel point, and detects the detection area to obtain the detection result of the detection area. In this way, according to the reference feature data corresponding to each target candidate region and the feature data to be identified corresponding to each pixel point in each target image, the detection region is determined, and the detection region is detected, so that each pixel point in the image can be accurately detected, the detection region of an object with a smaller size is determined, and the image detection is rapidly and accurately realized.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific operation of the image detection apparatus 10 described above may refer to the corresponding procedure in the foregoing method, and will not be described in detail herein.
In view of the foregoing, the present embodiment provides a storage medium having a computer program stored thereon, which when executed by a processor, implements the image detection method of any one of the foregoing embodiments.
It will be clear to those skilled in the art that, for convenience and brevity of description, reference may be made to corresponding processes in the foregoing method for specific working processes of the storage medium, and thus, redundant description is not necessary.
In summary, the image detection method, apparatus, electronic device and storage medium provided by the embodiments of the present invention acquire at least two target images, determine the target candidate regions in each target image, obtain reference feature data corresponding to each target candidate region according to the pixel values of each target candidate region in each target image, obtain, for each pixel point in each target image, feature data to be identified according to the pixel values of the pixel point in each target image, determine a detection area according to each piece of reference feature data and the feature data to be identified corresponding to each pixel point, and detect the detection area to obtain a detection result. The target candidate region is a region where an object to be detected of relatively large size is located. In this way, a detection area is determined from the reference feature data corresponding to each target candidate region and the feature data to be identified corresponding to each pixel point in each target image, and the detection area is detected to obtain its detection result. Detection can be accurate down to each pixel point in the image, so that detection areas of objects of smaller size are determined, and image detection is realized rapidly and accurately.
The above description is merely illustrative of embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or substitution that a person skilled in the art could readily conceive of within the technical scope disclosed herein shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. An image detection method, comprising:
acquiring at least two target images, and determining a target candidate region in each target image; the positions and the number of the target candidate areas included in each target image are the same;
obtaining reference feature data corresponding to each target candidate region according to the pixel value of each target candidate region in each target image;
aiming at each pixel point in each target image, obtaining feature data to be identified corresponding to the pixel point according to the pixel value of the pixel point in each target image;
determining a detection area according to the reference characteristic data and the characteristic data to be identified corresponding to the pixel points;
and detecting the detection area to obtain a detection result of the detection area.
2. The image detection method according to claim 1, wherein the step of obtaining the reference feature data corresponding to each target candidate region from the pixel values of each target candidate region in each target image includes:
acquiring pixel values of the center point of each target candidate region in each target image;
and obtaining the reference characteristic data corresponding to each target candidate region according to the pixel value of the center point of each target candidate region in each target image.
3. The image detection method according to claim 1, wherein the step of determining the detection area according to each of the reference feature data and the feature data to be identified corresponding to each of the pixels includes:
determining and obtaining a target pixel point according to the reference feature data and the feature data to be identified corresponding to the pixel points;
and constructing the detection area according to the target pixel point.
4. The image detection method according to claim 3, wherein the step of determining a target pixel point according to each of the reference feature data and the feature data to be identified corresponding to each of the pixel points includes:
Calculating the similarity of each piece of reference characteristic data and the characteristic data to be identified corresponding to each pixel point according to the characteristic data to be identified corresponding to each pixel point;
detecting whether the similarity of each piece of reference characteristic data and the characteristic data to be identified corresponding to the pixel point accords with a preset threshold value;
if the similarity between at least one piece of the reference feature data and the feature data to be identified corresponding to the pixel point meets the preset threshold, setting the pixel point as a target pixel point.
5. The image detection method according to claim 3, wherein the step of constructing a detection area from the target pixel point includes:
constructing a preset frame with a first preset size by taking the target pixel point as a center;
detecting whether the pixel points in the preset frame meet preset conditions or not;
if so, taking the target pixel point as a center point, and constructing a detection area according to the second preset size.
6. The image detection method according to claim 1, wherein the step of determining a target candidate region in each of the target images includes:
inputting each target image into a preset segmentation model, and determining and obtaining a target candidate region in each target image.
7. The image detection method according to claim 1, wherein the step of detecting the detection area to obtain a detection result of the detection area includes:
and inputting the detection area into a preset classification model to obtain a detection result of the detection area.
8. An image detection apparatus, comprising:
the acquisition module is used for acquiring at least two target images and determining target candidate areas in the target images; the positions and the number of the target candidate areas included in each target image are the same;
the feature construction module is used for obtaining reference feature data corresponding to each target candidate region according to the pixel value of each target candidate region in each target image, and is further used for obtaining, for each pixel point in each target image, feature data to be identified corresponding to the pixel point according to the pixel value of the pixel point in each target image;
the computing module is used for determining a detection area according to the reference characteristic data and the characteristic data to be identified corresponding to the pixel points;
and the detection module is used for detecting the detection area to obtain a detection result of the detection area.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the image detection method according to any one of claims 1 to 7 when executing the computer program.
10. A storage medium having stored thereon a computer program which, when executed by a processor, implements the image detection method of any of claims 1 to 7.
CN202310149740.0A 2023-02-13 2023-02-13 Image detection method, device, electronic equipment and storage medium Active CN116245835B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310149740.0A CN116245835B (en) 2023-02-13 2023-02-13 Image detection method, device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN116245835A true CN116245835A (en) 2023-06-09
CN116245835B CN116245835B (en) 2023-12-01

Family

ID=86627402

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310149740.0A Active CN116245835B (en) 2023-02-13 2023-02-13 Image detection method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116245835B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150278637A1 (en) * 2014-03-31 2015-10-01 Brother Kogyo Kabushiki Kaisha Image processing apparatus for generating combined image data by determining reference region
US20160078620A1 (en) * 2013-08-23 2016-03-17 Kabushiki Kaisha Toshiba Image processing apparatus and method, computer program product, and stereoscopic image display apparatus
JP2016110341A (en) * 2014-12-04 2016-06-20 キヤノン株式会社 Image processing device, image processing method and program
CN106485265A (en) * 2016-09-22 2017-03-08 深圳大学 A kind of image-recognizing method and device
CN106845361A (en) * 2016-12-27 2017-06-13 深圳大学 A kind of pedestrian head recognition methods and system
CN111476064A (en) * 2019-01-23 2020-07-31 北京奇虎科技有限公司 Small target detection method and device, computer equipment and storage medium
CN112883827A (en) * 2021-01-28 2021-06-01 腾讯科技(深圳)有限公司 Method and device for identifying designated target in image, electronic equipment and storage medium


Also Published As

Publication number Publication date
CN116245835B (en) 2023-12-01

Similar Documents

Publication Publication Date Title
JP6467041B2 (en) Ultrasonic diagnostic apparatus and image processing method
NL1024314C2 (en) Integrated image registration for cardiac magnetic resonance blood flow data.
US10178941B2 (en) Image processing apparatus, image processing method, and computer-readable recording device
CN113826143A (en) Feature point detection
JP2013051988A (en) Device, method and program for image processing
US9504450B2 (en) Apparatus and method for combining three dimensional ultrasound images
CN109767448B (en) Segmentation model training method and device
US11455720B2 (en) Apparatus for ultrasound diagnosis of liver steatosis using feature points of ultrasound image and remote medical-diagnosis method using the same
TW202217837A (en) Training method of image detection model, electronic equipment and computer-readable storage medium
CA3192536A1 (en) Motion-compensated laser speckle contrast imaging
CN115843373A (en) Multi-scale local level set ultrasonic image segmentation method fusing Gabor wavelets
CN104732520A (en) Cardio-thoracic ratio measuring algorithm and system for chest digital image
CN111951215A (en) Image detection method and device and computer readable storage medium
CN111681205B (en) Image analysis method, computer device, and storage medium
CN110244249B (en) Magnetic resonance scanning method, magnetic resonance scanning device, medical scanning equipment and storage medium
CN104361554A (en) Intravascular ultrasound image based automatic adventitia detection method
CN111986139B (en) Method, device and storage medium for measuring carotid intima-media thickness
CN113876345B (en) Method, apparatus, electronic device, and storage medium for identifying ischemic penumbra
CN116245835B (en) Image detection method, device, electronic equipment and storage medium
CN111696113A (en) Method and system for monitoring a biological process
CN113112473B (en) Automatic diagnosis system for human body dilated cardiomyopathy
US11944486B2 (en) Analysis method for breast image and electronic apparatus using the same
US20210251601A1 (en) Method for ultrasound imaging and related equipment
CN114757951B (en) Sign data fusion method, data fusion equipment and readable storage medium
Li et al. Active contour model-based segmentation algorithm for medical robots recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100120 rooms 303, 304, 305, 321 and 322, building 3, No. 11, Chuangxin Road, science and Technology Park, Changping District, Beijing

Applicant after: Shukun Technology Co.,Ltd.

Address before: 100120 rooms 303, 304, 305, 321 and 322, building 3, No. 11, Chuangxin Road, science and Technology Park, Changping District, Beijing

Applicant before: Shukun (Beijing) Network Technology Co.,Ltd.

GR01 Patent grant