CN113139950B - Target object identification method and device - Google Patents

Target object identification method and device

Info

Publication number
CN113139950B
CN113139950B (application CN202110501793.5A)
Authority
CN
China
Prior art keywords
image
target
target object
judging
fragments
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110501793.5A
Other languages
Chinese (zh)
Other versions
CN113139950A (en)
Inventor
田之进
黄振杰
李碧丹
张俊明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Huajia Software Co ltd
Guangzhou Jiadu Technology Software Development Co ltd
PCI Technology Group Co Ltd
PCI Technology and Service Co Ltd
Original Assignee
Guangzhou Huajia Software Co ltd
Guangzhou Jiadu Technology Software Development Co ltd
PCI Technology Group Co Ltd
PCI Technology and Service Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huajia Software Co ltd, Guangzhou Jiadu Technology Software Development Co ltd, PCI Technology Group Co Ltd, PCI Technology and Service Co Ltd filed Critical Guangzhou Huajia Software Co ltd
Priority to CN202110501793.5A priority Critical patent/CN113139950B/en
Publication of CN113139950A publication Critical patent/CN113139950A/en
Priority to PCT/CN2021/134341 priority patent/WO2022237135A1/en
Application granted granted Critical
Publication of CN113139950B publication Critical patent/CN113139950B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T 7/0002: Image analysis; inspection of images, e.g. flaw detection
    • G06T 3/4038: Geometric image transformations; scaling; image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 7/11: Segmentation; region-based segmentation
    • G06T 7/136: Segmentation; edge detection involving thresholding
    • G06T 7/187: Segmentation involving region growing, region merging or connected component labelling
    • G06T 2200/32: Indexing scheme involving image mosaicing
    • G06T 2207/10116: Image acquisition modality: X-ray image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The application discloses a target object identification method and device, the method comprising: receiving image slices sent by an X-ray device, wherein the number of image scan lines in each slice is less than a first preset threshold; storing the image slices in a preset buffer and sending them to a target image judgment device, which displays them; when the number of image slices stored in the preset buffer reaches a second preset threshold, stitching the stored slices into a target image; and performing target object identification on the target image. An image judgment operator can thus begin judging from the image slices instead of waiting for the whole image to appear, which increases the time available for manual image judgment and improves both its efficiency and accuracy. Moreover, by the time the server finishes identifying the image, the image on the judgment-device side has been fully displayed and the operator has finished judging it, so image identification and image judgment proceed in parallel.

Description

Target object identification method and device
Technical Field
Embodiments of the present application relate to data processing technologies, and in particular, to a method and an apparatus for identifying a target object.
Background
X-ray security inspection machines are widely used in fields such as transportation and logistics, and society's rapid development places ever higher demands on security inspection speed and accuracy.
When conventional X-ray inspection equipment used in the rail transit field performs remote centralized image judgment, a typical X-ray machine scans a package and the package image appears at the on-site judgment end. If an intelligent image recognition instrument is installed, its video capture card grabs the image from the on-site judgment end's video interface and passes it to the instrument's AI module, which pushes the whole image to the remote judgment end only after recognition finishes. X-ray scanning, AI image recognition (if equipped), the on-site judgment end (if equipped) and the remote judgment end thus work in series, which the operator at the remote judgment end perceives visually as an obvious stutter. Before a package image reaches the judgment end, the operator cannot judge the package being scanned and faces either a blank screen or the previous image; yet once the current package image is finally displayed, too little time is left, aggravating the time pressure of manual image judgment.
Disclosure of Invention
The application provides a target object identification method and device to solve the prior-art problems that image judgment operators perceive a visual stutter when package images appear and that the time pressure of manual image judgment is aggravated.
In a first aspect, an embodiment of the present application provides a method for identifying a target object, where the method includes:
receiving image fragments sent by X-ray equipment, wherein the number of image scanning lines of the image fragments is less than a first preset threshold value;
storing the image fragments in a preset buffer area, sending the image fragments to target image judging equipment, and displaying the image fragments by the target image judging equipment;
when the number of the image fragments stored in the preset buffer zone reaches a second preset threshold value, splicing the stored image fragments into a target image;
and carrying out target object identification on the target image.
Optionally, the method further comprises:
acquiring a recognition result of the target object recognition;
and sending the identification result to the target graph judging equipment, and displaying the identification result in the displayed image by the target graph judging equipment.
Optionally, the method further comprises:
when the target object is judged to be a designated alarm object according to the identification result, generating an alarm mark;
and sending the alarm mark to the target graph judging equipment, displaying the alarm mark in the displayed image by the target graph judging equipment, and sending an alarm signal.
Optionally, the sending the image slice to the target graph judging device includes:
determining target graph judging equipment from a plurality of candidate graph judging equipment according to a load balancing rule;
and sending the image fragments to the target image judging device, wherein the image fragments belonging to the same target object identified in the preset buffer zone are respectively distributed to the same target image judging device.
Optionally, the performing target object recognition on the target image includes:
judging whether the target image is a blank image or not;
if the target image is not a blank image, detecting a connected domain of the target image to obtain one or more target connected domains;
dividing the target image into a set of image patches according to the one or more target connected domains;
performing interference elimination treatment on each image chip in the image chip set respectively;
and inputting each interference-removed image patch into a pre-generated target detection model respectively, and obtaining, for each image patch, the target object detection result output by the target detection model.
Optionally, the target object recognition on the target image further includes:
and judging whether each target connected domain has missed detection according to the detection result.
Optionally, after the target object recognition is performed on the target image, the method further includes:
judging whether the detected target object is a complete target object or an incomplete target object;
when the target image is judged to contain an incomplete target object, determining a left boundary of the incomplete target object;
tracing back a third preset threshold number of image scan lines to the left from the left boundary of the incomplete target object to obtain left boundary information of the incomplete target object;
taking the scan lines to the right of the starting boundary, given by the left boundary information of the incomplete target object, as the image slice of the target object;
and distributing the image fragments of the incomplete target object to next determined target image judging equipment.
In a second aspect, an embodiment of the present application further provides an apparatus for identifying a target object, where the apparatus includes:
the image slice receiving module is used for receiving image slices sent by the X-ray device, where the number of image scan lines of each image slice is less than a first preset threshold;
the image segmentation sending module is used for storing the image segmentation in a preset buffer area, sending the image segmentation to target image judgment equipment, and displaying the image segmentation by the target image judgment equipment;
the image stitching module is used for stitching the stored image fragments into a target image when the number of the image fragments stored in the preset buffer zone reaches a second preset threshold value;
and the image recognition module is used for recognizing the target object of the target image.
In a third aspect, embodiments of the present application further provide a security inspection apparatus, where the security inspection apparatus includes:
one or more processors;
storage means for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of the first aspect described above.
In a fourth aspect, embodiments of the present application also provide a computer readable storage medium having stored thereon a computer program which when executed by a processor implements the method of the first aspect described above.
The technical scheme provided by the application has the following beneficial effects:
in this embodiment, the image slices transmitted by the X-ray device each contain fewer image scan lines than the first preset threshold. After receiving an image slice, the server stores it in a preset buffer and at the same time can send it to the target image judgment device for display, so an image judgment operator can begin judging from the image slices without waiting for the whole image to appear, which increases the time available for manual image judgment and improves both its efficiency and accuracy.
In addition, when the number of image slices stored in the server's preset buffer reaches the second preset threshold, the server can stitch the stored slices into a target image and perform target object identification on it. As a result, by the time the server finishes identifying the image, the image on the judgment-device side has been fully displayed and the operator has finished judging it, so image identification and image judgment proceed in parallel.
Drawings
FIG. 1 is a flowchart of an embodiment of a method for target object identification according to an embodiment of the present application;
FIG. 2 is a flowchart of an embodiment of a method for identifying a target object according to a second embodiment of the present application;
FIG. 3 is a block diagram illustrating an apparatus for identifying a target object according to a third embodiment of the present application;
fig. 4 is a schematic structural diagram of a security inspection device according to a fourth embodiment of the present application.
Detailed Description
The present application is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present application are shown in the drawings.
Example 1
Fig. 1 is a flowchart of an embodiment of a method for identifying a target object, where the embodiment may be applied to a server, and in an application scenario, the server may include security inspection equipment in a security inspection scenario, and may specifically include the following steps:
step 110, receiving an image slice sent by an X-ray device, wherein the number of image scanning lines of the image slice is less than a first preset threshold.
In an embodiment, the present embodiment may be applied in a security inspection scenario, and the X-ray device may comprise the X-ray inspection unit of a security inspection machine. In implementation, the X-ray device can, at the bottom layer of its internal algorithm, send the X-ray image scan lines in its graphics-card buffer directly to the network interface for transmission. Furthermore, to save CPU resources and improve processing efficiency, the scan lines are not transmitted line by line; instead they are first grouped into image patches (i.e. image slices) and then sent packaged in the network card's jumbo-frame mode.
Image slices may serve as the transmission and processing unit of the present embodiment, and as an example the number of image scan lines per slice may be set to be less than a first preset threshold. The first preset threshold may be set according to actual service requirements, which this embodiment does not limit. Of course, it should be neither too large nor too small: too large a value degrades the later synchronization effect, while too small a value hurts processing efficiency. For example, the first preset threshold may be set to 50 (assuming the security inspection machine's conveyor belt runs at 0.6 m/s and the X-ray detector scans at 750 Hz, 50 image scan lines correspond to 1/15 s of radiographic image, i.e. about 4 cm of package).
In one implementation, the server may issue a profile of scan lines to the X-ray device, which may contain a configuration of the number of image scan lines per image slice, e.g., a record of combining image slices according to a rule of less than 50 scan lines may be included in the profile. The X-ray device may transmit less than 50X-ray image scan lines as one image slice according to the profile.
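As a minimal sketch of the slicing rule above (the function name and the slice size of 50 are illustrative; the patent only requires the count to stay under a configured threshold), grouping scan lines into image slices can look like:

```python
def make_slices(scan_lines, max_lines=50):
    """Group scan lines into image slices of at most `max_lines` lines each.

    `scan_lines` is a sequence of scan-line records in acquisition order;
    the last slice may be shorter than `max_lines`.
    """
    return [scan_lines[i:i + max_lines]
            for i in range(0, len(scan_lines), max_lines)]
```

With the example figures from the text, 120 scan lines would yield two full slices and one 20-line remainder.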
And 120, storing the image fragments in a preset buffer area, sending the image fragments to target image judgment equipment, and displaying the image fragments by the target image judgment equipment.
In this step, a preset buffer for storing image slices may be created in the server in advance; each time the server receives an image slice, it first stores the slice in this buffer. The server can then read image slices from the buffer and send them to the target image judgment device, which displays the received slices in real time in a scrolling reel form. The device's operator can therefore begin judging in advance from the image slices, which increases the time available for manual image judgment. Moreover, to the operator the slices unroll smoothly like a scroll, with no visible stutter.
In one embodiment, the step of sending the image slices to the target graph determining apparatus in step 120 may further include the following steps:
determining target graph judging equipment from a plurality of candidate graph judging equipment according to a load balancing rule; and sending the image fragments to the target image judging equipment.
In this embodiment, the specific load balancing rules are not limited; when selecting a target image judgment device according to them, factors such as the load of each candidate device, the experience value of its image judgment operator, and its history of executed tasks may be considered together. For example, in a security inspection scenario, the working state of each candidate image judgment device may be obtained, where the working state may include an idle state and a busy state. If more than one candidate device is idle, the experience value of the operator of each idle candidate device can be obtained, and the device whose operator has the higher experience value is selected as the target device. If exactly one candidate device is idle, it is directly taken as the target device. If no candidate device is idle, the experience value of the operator of each candidate device can be obtained, and the device whose operator has the higher experience value is selected as the target device. If the operators' experience values are almost the same, the device with fewer pending tasks can be selected as the target image judgment device.
In order to ensure complete display of an object (such as a package in a security inspection machine), image fragments belonging to the same target object identified in a preset buffer zone are distributed to the same target graph judging equipment.
According to the embodiment, the target graph judging device is selected from the plurality of candidate graph judging devices according to the load balancing rule, and the image fragments are distributed to the target graph judging device, so that a plurality of graph judging tasks can be prevented from being piled up in the same graph judging device for processing, and the graph judging efficiency is improved. For example, through this embodiment, different parcel images are automatically assigned to multiple different image judgment operators to judge simultaneously, and compared with the same image judgment operator to judge multiple parcel images, the efficiency and accuracy of image judgment can be improved.
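A minimal sketch of such a load-balancing rule, assuming each candidate device reports a working state, an operator experience value, and a pending-task count (all field names here are hypothetical, not from the patent):

```python
def pick_target_device(devices):
    """Select a target image judgment device per the rule described above:
    prefer idle devices; among the candidates, prefer the operator with
    the highest experience value, breaking ties by fewer pending tasks.

    `devices` is a non-empty list of dicts with keys
    'state' ('idle' or 'busy'), 'experience', and 'pending_tasks'.
    """
    idle = [d for d in devices if d["state"] == "idle"]
    pool = idle if idle else devices  # fall back to all devices if none idle
    return max(pool, key=lambda d: (d["experience"], -d["pending_tasks"]))
```

In a real system the same function would run once per identified package, so that consecutive packages fan out across operators.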
And step 130, when the number of the image fragments stored in the preset buffer zone reaches a second preset threshold value, splicing the stored image fragments into a target image.
In practice, since the time required for performing the image recognition once in the subsequent step 140 is substantially fixed, in order to reduce the workload of image recognition, the embodiment may stitch the stored image slices into the target image when the number of image slices in the buffer reaches the second preset threshold, and then perform image recognition based on the target image.
In one implementation, a counter may be set in the preset buffer, and when the counter displays that the number of image slices stored in the preset buffer reaches a second preset threshold, the image stitching operation is triggered.
Specifically, when the X-ray device generates the image slices, the slice serial numbers can be added to the image slices, and on the server side, the image can be spliced according to the slice serial numbers of the image slices. The embodiment is not limited to a specific image stitching algorithm.
It should be noted that the second preset threshold may be determined according to an actual service requirement, which is not limited in this embodiment, and for example, the second preset threshold may be set to 20.
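The counter-triggered stitching described above can be sketched as follows; the class and method names are illustrative, and "stitching" is reduced to concatenating the slices in serial-number order (per the slice serial numbers mentioned above):

```python
class SliceBuffer:
    """Preset buffer that accumulates image slices and stitches them into
    a target image once the counter reaches the second preset threshold."""

    def __init__(self, threshold=20):
        self.threshold = threshold
        self.slices = []  # list of (serial_number, slice_lines)

    def add(self, serial, slice_lines):
        """Store one slice; return the stitched target image when the
        threshold is reached, else None."""
        self.slices.append((serial, slice_lines))
        if len(self.slices) >= self.threshold:
            return self.flush()
        return None

    def flush(self):
        """Stitch buffered slices in serial-number order and reset."""
        self.slices.sort(key=lambda s: s[0])
        target = [line for _, lines in self.slices for line in lines]
        self.slices = []
        return target
```

Ordering by serial number makes the stitch robust to slices arriving out of order over the network.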
And 140, carrying out target object recognition on the target image.
For example, in a security inspection scenario, the target object may include a package (e.g., a case or bag) in a security inspection conveyor belt, and by identifying the package in the target image, it may be determined whether the target image contains a complete package image.
In one implementation, target object recognition may be performed by a combination of conventional visual algorithms and deep learning algorithms. Step 140 may further comprise the steps of:
And step 140-1, judging whether the target image is a blank image.
In one embodiment, the server may determine whether the target image is a blank image in the following manner:
acquiring gray information of a target image; and if the gray information accords with the appointed gray condition, judging that the target image is a blank image.
Specifically, the target image may be subjected to graying processing to generate a gray image, and a gray histogram of the gray image may be obtained. If the gray level histogram meets the specified gray level condition, it may be determined that the target image is a blank image.
In one example, the specified gray scale conditions may include: the number of inflection points of the gray histogram envelope is one, that is, if the number of inflection points of the envelope of the gray histogram of the target image is one, it is determined that the target image is a blank image.
In other examples, the specified gray condition may include closeness to the full-scale gray histogram, i.e. if the gray histogram of the current target image is close to the full-scale gray histogram, the target image is determined to be blank. Full scale refers to the detector response when no target object is present in the acquisition environment of the image acquisition device; in a security inspection scene, it is the response when the X-rays reach the detector without passing through any object. When the gray histogram of the target image is close to that of a full-scale-response image, the target image can be determined to be a blank image.
In another embodiment, an image where the target object is not present but noise (such as various interferents) is present may be used as the blank image, and the server may determine whether the target image is the blank image as follows:
detecting connected domains of the target image; if one or more connected domains are detected, obtaining their areas and filtering out those whose area is below a certain threshold; if no connected domain remains, the target image is judged to be a blank image. A connected domain, also called a connected component, refers to an image region (blob) formed by foreground pixels that have the same pixel value and are adjacent in position.
Of course, in addition to the two modes, those skilled in the art may recognize the blank image in other suitable manners according to the actual scene, which is not limited in this embodiment.
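As a hedged illustration of the full-scale idea above (a simple proxy, not the patent's exact criterion): a blank X-ray image is nearly uniform at the background (air) gray level, so the fraction of pixels deviating from that level is tiny. Function and parameter names are illustrative:

```python
def is_blank(gray_image, background=255, tol=5, max_frac=0.001):
    """Treat the image as blank when almost every pixel sits within `tol`
    gray levels of the full-scale background value.

    `gray_image` is a 2-D list of 8-bit gray values; `background` is the
    full-scale (air) response, typically near 255 for X-ray transmission.
    """
    total = sum(len(row) for row in gray_image)
    foreground = sum(1 for row in gray_image for p in row
                     if abs(p - background) > tol)
    return foreground / total < max_frac
```

A histogram-envelope test (single inflection point) or a connected-domain count, as described above, would be drop-in alternatives for the same decision.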
If the target image is a blank image, the blank image can be directly discarded, so that resource waste caused by the subsequent recognition of the blank image is avoided. If the target image is not a blank image, then step 140-2 may continue.
And 140-2, if the target image is not a blank image, detecting the connected domain of the target image to obtain one or more target connected domains.
In this step, when the server determines that the target image is not a blank image, connected domain detection may be performed to partition the target image into one or more target connected domains.
In one embodiment, step 140-2 may further comprise the steps of:
and 140-2-1, carrying out graying and binarization processing on the target image to obtain a binary image.
This step removes the background interference of the target image by graying and binarizing it, obtaining a binary image I_bin.
And 140-2-2, denoising the binary image.
This step denoises the binary image in order to separate objects that are not stuck closely together (in a security inspection scene these may be, for example, cases on a conveyor belt); the denoised binary image may be denoted I_denoise.
In one example, the denoising process may include a median filtering process, a morphological erosion process, or the like.
And 140-2-3, carrying out connected domain analysis on the binary image after denoising treatment to obtain one or more candidate connected domains.
In this step, the server may find and label each connected region in the denoised binary image I_denoise by a connected component analysis (Connected Component Analysis, also Connected Component Labeling) algorithm, finally obtaining one or more candidate connected domains.
In this embodiment, the specific connected region analysis algorithm is not limited, and may be, for example, two-Pass (Two-Pass) or Seed-Filling, so long as one or more candidate connected regions can be obtained finally.
And 140-2-4, filtering out candidate connected domains with areas smaller than the set area threshold, and taking the rest connected domains as target connected domains.
In this step, after one or more candidate connected domains are obtained, the area of each candidate connected domain may be calculated; in one implementation, fabs(cvContourArea(c, CV_WHOLE_SEQ)) may be used. Of course, the area may also be calculated in other ways, for example by counting the number of pixels in each candidate connected domain as its area, or by using other functions such as MATLAB's total = bwarea(BW).
After the area of each candidate connected domain is obtained, the area of each candidate connected domain can be compared with a set area threshold T, then the candidate connected domains with the area smaller than T are filtered, and finally the rest connected domains are used as target connected domains.
There may be one or more target connected domains; for example, a plurality of target connected domains may be described as L_coarse = {(x_1, y_1, w_1, h_1), ..., (x_i, y_i, w_i, h_i)}, where i denotes the i-th target connected domain, (x_i, y_i) is its top-left corner coordinate, and w_i and h_i are its width and height, respectively.
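Steps 140-2-3 and 140-2-4 can be sketched with a plain BFS-based connected component labelling, used here as a stand-in for the Two-Pass or Seed-Filling algorithms mentioned above; the function name and the small-area filter interface are illustrative:

```python
from collections import deque

def connected_domains(binary, min_area=4):
    """Label 4-connected foreground components of a binary 2-D list and
    return (x, y, w, h) boxes (top-left corner, width, height) for the
    components whose pixel area is at least `min_area`; smaller components
    are filtered out, matching step 140-2-4."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                # flood-fill one component starting from (y, x)
                queue, pixels = deque([(y, x)]), []
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(pixels) >= min_area:
                    ys = [p[0] for p in pixels]
                    xs = [p[1] for p in pixels]
                    boxes.append((min(xs), min(ys),
                                  max(xs) - min(xs) + 1,
                                  max(ys) - min(ys) + 1))
    return boxes
```

The returned boxes follow the (x_i, y_i, w_i, h_i) layout of L_coarse described in the text.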
And 140-3, dividing the target image into an image patch set according to the one or more target connected domains.
In this step, after marking one or more target connected domains, the target image may be segmented according to them to obtain the corresponding image patches, which together form an image patch set. The set of image patches may be represented as I_split = {I_1, ..., I_i}, where I_i is an image patch.
And 140-4, performing interference elimination processing on each image chip in the image chip set.
In this embodiment, by performing interference elimination processing on each image patch, it is possible to ensure that the number of connected domains corresponding to each image patch does not exceed 1, thereby improving the accuracy of subsequent detection of the target object.
For the image patch set I_split = {I_1, ..., I_i}, after interference elimination has been applied to each image patch, the interference-removed patch set I_d = {I_d1, ..., I_di} is obtained.
In one embodiment, step 140-4 may further comprise the steps of:
respectively carrying out connected domain detection on each image chip; if the number of the connected domains contained in the current image small piece is 1, the current image small piece is not processed, and the current image small piece is directly used as the image small piece after interference elimination processing; if the number of the connected domains contained in the current image small piece is larger than 1, the connected domain with the largest area in the current image small piece is reserved, and pixels of other connected domains are set as background pixels, so that the image small piece after interference elimination processing is obtained.
Specifically, connected domain detection may be performed on each image patch by the method of step 140-2 to obtain the connected domain positions within each patch. The number of connected domains in each patch is then analyzed. If a patch contains exactly one connected domain, no interference elimination is applied and the patch is used directly as the interference-removed patch, i.e. I_di = I_i, where I_di denotes the patch after interference elimination. If a patch contains more than one connected domain, the area of each is calculated, the connected domain with the largest area is kept, and the pixels of the other connected domains are set to background pixels, yielding the interference-removed patch; that is, I_di is I_i with the largest-area connected domain retained and the pixel values inside the remaining connected domains set to the background pixel value.
The background pixel value may be chosen according to the background of the image patch; for example, if the background of the image patch is white, its background pixel value may be (255, 255, 255).
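As an illustration of the interference-elimination of step 140-4, the following numpy-only sketch keeps the largest connected domain of a patch and overwrites the rest with the background value. It assumes a white background (255) and 4-connectivity; the patent does not fix an implementation, and in practice a library routine such as OpenCV's connected-component analysis would typically be used instead.

```python
import numpy as np
from collections import deque

def keep_largest_component(patch, background=255):
    """Keep only the largest connected domain of an image patch.

    Foreground pixels are those differing from `background`; every connected
    component except the largest is overwritten with the background value.
    """
    fg = patch != background
    labels = np.zeros(patch.shape, dtype=int)
    sizes = {}
    current = 0
    for sy, sx in zip(*np.nonzero(fg)):
        if labels[sy, sx]:
            continue
        current += 1                      # start a new component (BFS flood fill)
        q = deque([(sy, sx)])
        labels[sy, sx] = current
        size = 0
        while q:
            y, x = q.popleft()
            size += 1
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < patch.shape[0] and 0 <= nx < patch.shape[1]
                        and fg[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    q.append((ny, nx))
        sizes[current] = size
    if len(sizes) <= 1:
        return patch.copy()               # zero or one domain: nothing to remove
    keep = max(sizes, key=sizes.get)      # largest-area domain survives
    out = patch.copy()
    out[(labels != 0) & (labels != keep)] = background
    return out
```

A patch with a single connected domain passes through unchanged, matching the "do not process" branch above.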
Step 140-5, respectively inputting each interference-eliminated image patch into a pre-generated target detection model, and obtaining the detection result for the target object that the target detection model outputs for each image patch.
In this step, after each image patch has undergone interference-elimination processing to obtain the interference-eliminated patches I_di, each I_di may be input into the pre-generated target detection model, which detects the target object in each I_di and outputs the corresponding detection result.
In one embodiment, the object detection model may be a deep-learning-based machine learning model; for example, the object detection model may be a YOLOv3 model.
In one implementation, assuming the YOLOv3 model is used for case (luggage) detection in a security-inspection scenario, the YOLOv3 model may be trained as follows:
dividing an X-ray long image using the vertical projection method to obtain a data set D, then annotating D and splitting it into a training set, a test set, and a validation set; re-clustering the coordinates of 9 anchors on D with the k-means clustering algorithm to replace the default anchors of the YOLOv3 model; and optimizing the model parameters by stochastic gradient descent until the loss function converges.
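The anchor re-clustering step can be sketched as follows. The patent names k-means over the annotated box sizes but not a distance metric; the 1 − IoU distance used in the original YOLO work is assumed here, and all function names are illustrative.

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between (w, h) pairs, treating boxes and anchors as sharing a corner."""
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0])
             * np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    union = ((boxes[:, 0] * boxes[:, 1])[:, None]
             + (anchors[:, 0] * anchors[:, 1])[None, :] - inter)
    return inter / union

def kmeans_anchors(wh, k=9, iters=100, seed=0):
    """Re-cluster k anchor shapes on box sizes `wh`, distance = 1 - IoU."""
    rng = np.random.default_rng(seed)
    anchors = wh[rng.choice(len(wh), k, replace=False)].astype(float)
    for _ in range(iters):
        assign = np.argmax(iou_wh(wh, anchors), axis=1)   # nearest = highest IoU
        new = np.array([wh[assign == i].mean(axis=0) if (assign == i).any()
                        else anchors[i] for i in range(k)])
        if np.allclose(new, anchors):
            break
        anchors = new
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]  # sort by area
```

The resulting k anchors, sorted by area, would replace the default YOLOv3 anchors before training.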
Illustratively, in the security-inspection scenario the target object may be a case on the security-inspection conveyor belt, and the detection result output by the target detection model may include the position information of the cases in each image patch, expressed as: L_fine = {B_1, …, B_i} = {{(x_11, y_11, w_11, h_11), …}, …, {…, (x_ij, y_ij, w_ij, h_ij)}}, where B_i denotes the position information of the cases in the i-th image patch, and (x_ij, y_ij, w_ij, h_ij) denotes the j-th case detected in the i-th image patch.
In one embodiment, after the target object is finely detected by step 140-5, step 140 may further include the steps of:
and 140-6, judging whether the connected domain of each target has missed detection according to the detection result.
In actual processing, detections may be missed within a target connected domain. This embodiment therefore reduces missed detections by performing a missed-detection analysis on each target connected domain.
In one embodiment, the detection result may include a location of the target object; step 140-6 may further comprise the steps of:
judging, according to the position of each target object, whether a target object has been detected in the current target connected domain; if no target object has been detected in the current target connected domain, judging that a missed detection exists and taking the current target connected domain itself as a target object; if a target object has been detected in the current target connected domain, locating the image patch corresponding to the current target connected domain, setting the pixel values of the detected target objects in that image patch to the background pixel value, and then performing connected domain detection on the image patch; if a connected domain is still detected, judging that a missed detection exists in the current target connected domain and taking the detected connected domain as a target object.
Specifically, if the position information of a target connected domain is L_i = {(x_i, y_i, w_i, h_i)} and the position information of its target objects is B_i = {(x_i1, y_i1, w_i1, h_i1), …, (x_ij, y_ij, w_ij, h_ij)}, each target-object position (x_ij, y_ij, w_ij, h_ij) is matched against L_i = {(x_i, y_i, w_i, h_i)} to determine how many target objects the position information of the current target connected domain contains.
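The position matching can be read as a plain axis-aligned rectangle-overlap test; the patent does not fix the matching criterion, so the sketch below is one natural reading, with (x, y, w, h) boxes as above.

```python
def overlaps(box, domain):
    """Axis-aligned rectangle intersection test for (x, y, w, h) tuples."""
    bx, by, bw, bh = box
    dx, dy, dw, dh = domain
    return bx < dx + dw and dx < bx + bw and by < dy + dh and dy < by + bh

def targets_in_domain(domain, boxes):
    """Count how many detected boxes overlap a target connected domain."""
    return sum(overlaps(b, domain) for b in boxes)
```

A count of 0 corresponds to the missed-connected-domain branch described next; a positive count triggers the masking-and-re-detection branch.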
Since a target connected domain necessarily contains a target object, if the position information of the current target connected domain contains 0 target objects, i.e. no target-object position overlaps the position of the current target connected domain, the target connected domain can be determined to be a missed connected domain and is added to the set of target objects as a target object, i.e. L_i is added to L_result (the set of target objects).
If the position information of the current target connected domain contains more than 0 target objects, i.e. the positions of more than 0 target objects overlap the position of the current target connected domain, the position information B_i of each contained target object is first added to L_result; then, in the corresponding image patch I_i, the pixel values in all regions corresponding to B_i are set to the background pixel value (e.g. (255, 255, 255)), and coarse detection is performed on the resulting image. If the number of connected domains obtained by the coarse detection is not 0, those connected domains are missed connected domains, and each detected connected domain is added to L_result as a target object.
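The re-check after masking can be sketched as follows. Since at least one connected domain exists exactly when any foreground pixel survives the masking, the presence test below stands in for the coarse detection; in the full flow each residual domain's position would be appended to L_result. The white background value is an assumption.

```python
import numpy as np

def residual_after_masking(patch, boxes, background=255):
    """Mask out detected boxes in a patch, then check for leftover foreground.

    `boxes` are (x, y, w, h) detections inside the patch. Any foreground that
    survives the masking belongs to a missed connected domain.
    """
    out = patch.copy()
    for x, y, w, h in boxes:
        out[y:y + h, x:x + w] = background   # blank the detected region
    missed = bool(np.any(out != background)) # >=1 residual connected domain?
    return out, missed
```

When `missed` is true, the masked patch would be fed back to the connected-domain detection of step 140-2 to localize the missed objects.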
Step 140-7, judging whether each detected target object is a complete target object or an incomplete target object.
In this embodiment, the detection results of all target connected domains together form the detection result of the whole target image. Step 140-7 performs an integrity analysis on this result, mainly judging whether each detected target object is a complete target object or an incomplete target object within the target image. For example, in an X-ray security-inspection scenario the conveyor belt moves continuously, so a case may be cut off before it has been fully scanned when the target image is generated; a case-integrity analysis is therefore required.
In one embodiment, step 140-7 may further comprise the steps of:
acquiring the upper-left abscissa and the width of the target object; judging, according to the upper-left abscissa and the width, whether the target object lies at the right edge of the target image; if so, judging the target object to be an incomplete target object; if not, judging the target object to be a complete target object.
For example, assume the upper-left abscissa and the width of the target object are x_i and w_i, and the width of the target image is W. If x_i + w_i + a > W, where a is a positive constant that may be chosen according to actual requirements (e.g. a = 5 from empirical values), the target object lies at the right edge of the target image and is an incomplete target object (an incomplete target object is generally located at an edge position). Conversely, if x_i + w_i + a ≤ W, the target object does not lie at the right edge of the target image and is a complete target object.
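The edge test reduces to a single comparison; `a = 5` below follows the empirical value mentioned above.

```python
def is_incomplete(x, w, image_width, a=5):
    """Right-edge test of step 140-7: a target whose right side reaches within
    `a` pixels of the stitched image's right edge is assumed to be cut off."""
    return x + w + a > image_width
```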
Step 140-8, when the target image is judged to contain an incomplete target object, determining the left boundary of the incomplete target object; tracing back to the left from this boundary by a third preset threshold of image scan lines to obtain the left-boundary information of the incomplete target object; taking the scan lines to the right of this left-boundary position, used as the starting boundary, as the image slice of the incomplete target object; and distributing the image slice of the incomplete target object to the next determined target image judging device.
Specifically, if the target image contains an incomplete target object, the position information of the incomplete target object may be obtained, and the left boundary of the incomplete target object may be extracted from the position information of the incomplete target object.
After the left boundary of the incomplete target object is obtained, the left boundary may be taken as the starting boundary and traced back to the left by a third preset threshold of image scan lines in order to avoid boundary errors; the resulting position serves as the left-boundary information of the incomplete target object. For example, 20 image scan lines may be traced back to the left from the left boundary, and the position of the 20th scan line used as the left-boundary information of the incomplete target object.
After the left-boundary information of the incomplete target object is obtained, it can be used as the starting position, and the image scan lines to its right form the image slice of the incomplete target object. This image slice is then distributed to the next determined target image judging device, which also continues to receive the image slices that the server forwards from the X-ray device, so that the integrity of the target object is guaranteed.
In this embodiment, the image slices transmitted by the X-ray device are slices whose number of image scan lines is less than the first preset threshold. After receiving an image slice, the server stores it in the preset buffer and can simultaneously send it to the target image judging device for display, so that the image judge can begin judging from the slices instead of waiting for the whole image to appear. This lengthens the time available for manual image judging and improves the efficiency and accuracy of judging.
In addition, when the number of image slices stored in the preset buffer of the server reaches the second preset threshold, the server can splice the stored image slices into a target image and perform target object recognition on it. By the time the server finishes recognition, the image on the judging-device side has been fully displayed and the image judge has finished judging, so image recognition and image judging proceed synchronously.
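The server's buffer-and-stitch behaviour can be sketched as follows, assuming slices are numpy arrays and scan lines are columns (so stitching is concatenation along axis 1); the threshold semantics follow step 230 below.

```python
import numpy as np

def maybe_stitch(buffer, threshold):
    """Once the buffer holds `threshold` slices, stitch them into one target
    image along the scan-line (column) axis and clear the buffer.
    Returns None while the buffer is still filling."""
    if len(buffer) < threshold:
        return None
    target = np.concatenate(buffer, axis=1)
    buffer.clear()
    return target
```

In the full system the returned target image would then go through the recognition pipeline (steps 140-1 to 140-8) while the slices remain visible on the judging device.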
Embodiment Two
Fig. 2 is a flowchart of a target object identification method provided in a second embodiment of the present application. This embodiment may be applied to a server and may specifically include the following steps:
step 210, receiving an image slice sent by an X-ray device, where the number of image scan lines of the image slice is less than a first preset threshold.
Step 220, storing the image slices in a preset buffer, and sending the image slices to a target image judging device, which displays the image slices.
Step 230, when the number of image slices stored in the preset buffer reaches a second preset threshold, splicing the stored image slices into a target image.
Step 240, performing target object recognition on the target image, and acquiring the recognition result of the target object recognition.
Step 250, sending the recognition result to the target image judging device, which displays the recognition result in the displayed image.
For example, the recognition result of performing target object recognition on the target image may include the position information of the target object. After the server obtains the recognition result, it can send the result to the target image judging device, and the target image judging device displays the recognition result in the currently displayed image. For example, after the target judging device obtains the position information of a package, it may display that position information in the current package image; in doing so, it can frame the position boundary of the package according to the position information.
Step 260, generating an alarm mark when the target object is judged, according to the recognition result, to be a designated alarm object.
In one implementation, the recognition result may further include the object type of the target object. An alarm-object list may be preset in the server, recording the types of various alarm objects and their corresponding characteristic information. When the server determines that the object type of the target object belongs to a type of alarm object recorded in the alarm-object list, the target object may be determined to be the designated alarm object; otherwise, the target object may be determined not to be a designated alarm object.
If the server determines that the target object is a specified alarm object, a corresponding alarm mark can be generated according to the object type of the target object. In one implementation, the alarm mark corresponding to the type of each alarm object may also be recorded in the alarm object list, and the server may directly obtain the alarm mark corresponding to the object type of the target object from the alarm object list.
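As a sketch, the alarm-object list can be a simple mapping from object type to alarm mark; the entries below ("knife", "lighter") and the mark strings are hypothetical examples, not taken from the patent.

```python
ALARM_LIST = {                     # hypothetical alarm-object list
    "knife": "ALARM_KNIFE",
    "lighter": "ALARM_LIGHTER",
}

def alarm_mark_for(object_type):
    """Look the detected object type up in the preset alarm-object list
    (steps 260/270); a non-alarm type yields no mark."""
    return ALARM_LIST.get(object_type)
```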
Step 270, sending the alarm mark to the target image judging device, which displays the alarm mark in the displayed image and issues an alarm signal.
After the server obtains the alarm mark of the target object, it can send the alarm mark to the target image judging device. After receiving the alarm mark, the target image judging device can display it in the displayed image. In other embodiments, the target image judging device may also issue an alarm signal according to the alarm mark; the alarm signal may be a highlighted alarm mark, an alarm sound, or the like, and this embodiment is not limited thereto.
In this embodiment, the server may synchronize the image recognition result, the alarm recognition result, and the like to the target image judging device, so as to keep image judging and image recognition synchronized.
Embodiment Three
Fig. 3 is a block diagram of an embodiment of a device for identifying a target object according to a third embodiment of the present application, where the device may be located in a server, and may include the following modules:
an image slice receiving module 310, configured to receive image slices sent by an X-ray device, where the number of image scan lines of the image slices is less than a first preset threshold;
the image slice sending module 320 is configured to store the image slice in a preset buffer area, send the image slice to a target image judgment device, and display the image slice by the target image judgment device;
an image stitching module 330, configured to stitch the stored image slices into a target image when the number of image slices stored in the preset buffer reaches a second preset threshold;
and the image recognition module 340 is configured to perform target object recognition on the target image.
In one embodiment, the apparatus may further comprise the following modules:
the recognition result acquisition module is used for acquiring a recognition result of the target object recognition;
and the identification result sending module is used for sending the identification result to the target graph judging equipment, and the target graph judging equipment displays the identification result in the displayed image.
In one embodiment, the apparatus may further comprise the following modules:
the alarm mark generation module is used for generating an alarm mark when the target object is judged to be a designated alarm object according to the identification result;
and the alarm mark sending module is used for sending the alarm mark to the target graph judging equipment, displaying the alarm mark in the displayed image by the target graph judging equipment and sending an alarm signal.
In one embodiment, the image tile sending module 320 is specifically configured to:
determining target graph judging equipment from a plurality of candidate graph judging equipment according to a load balancing rule;
and sending the image fragments to the target image judging device, wherein the image fragments belonging to the same target object identified in the preset buffer zone are respectively distributed to the same target image judging device.
In one embodiment, the image recognition module 340 may include the following sub-modules:
the blank judging sub-module is used for judging whether the target image is a blank image or not;
the connected domain detection sub-module is used for carrying out connected domain detection on the target image if the target image is not a blank image so as to obtain one or more target connected domains;
An image segmentation sub-module for segmenting the target image into image patch sets according to the one or more target connected domains;
the interference elimination processing sub-module is used for respectively carrying out interference elimination processing on each image chip in the image chip set;
the object detection sub-module is used for respectively inputting each image chip subjected to interference removal into a pre-generated target detection model, and obtaining a detection result of the target detection model on the target object output by each image chip.
In one embodiment, the image recognition module 340 may further include the following sub-modules:
and the missed-detection analysis sub-module, used for judging, according to the detection results, whether a missed detection exists in each target connected domain.
In one embodiment, the apparatus may further comprise the following modules:
the integrity judging module is used for judging whether the detected target object is a complete target object or an incomplete target object;
the incomplete-information intercepting module, used for determining the left boundary of the incomplete target object when the target image is judged to contain one; tracing back to the left from that boundary by a third preset threshold of image scan lines to obtain the left-boundary information of the incomplete target object; taking the scan lines to the right of the left-boundary position, used as the starting boundary, as the image slice of the incomplete target object; and distributing the image slice of the incomplete target object to the next determined target image judging device.
The device for identifying the target object provided by the embodiment of the application can execute the method for identifying the target object provided by any one of the first embodiment and the second embodiment of the application, and has the corresponding functional module and beneficial effects of executing the method.
Embodiment Four
Fig. 4 is a schematic structural diagram of a security inspection device provided in a fourth embodiment of the present application. The security inspection device may include a server, which may be located inside or outside the security inspection device, locally or at a remote end, and is communicatively connected to the security inspection device. As shown in fig. 4, the security inspection device includes a processor 410, a memory 420, an input device 430, and an output device 440; there may be one or more processors 410 in the security device (one processor 410 is taken as an example in fig. 4); the processor 410, memory 420, input device 430, and output device 440 in the security device may be connected by a bus or in other ways (connection by bus is taken as an example in fig. 4).
The memory 420 is a computer-readable storage medium that may be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the above embodiments of the present application. The processor 410 executes the various functional applications and data processing of the security device by running the software programs, instructions, and modules stored in the memory 420, i.e., implements the target object identification method mentioned in any of the method embodiments described above.
Memory 420 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and at least one application program required for functions, and the data storage area may store data created according to the use of the terminal, etc. In addition, memory 420 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, memory 420 may further include memory remotely located with respect to processor 410, which may be connected to the device/terminal/security device via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Input device 430 may be used to receive entered numeric or character information and to generate key signal inputs related to user settings and function control of the security device. The output device 440 may include a display device such as a display screen.
Embodiment Five
A fifth embodiment of the present application also provides a storage medium containing computer-executable instructions for performing the method of the above-described method embodiments when executed by a computer processor.
Of course, the storage medium containing the computer executable instructions provided in the embodiments of the present application is not limited to the method operations described above, and may also perform the related operations in the target object identification method provided in any embodiment of the present application.
From the above description of embodiments, it will be clear to a person skilled in the art that the present application may be implemented by means of software and necessary general purpose hardware, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a FLASH Memory (FLASH), a hard disk, or an optical disk of a computer, etc., including several instructions for causing an electronic device (which may be a personal computer, a server, or a network device, etc.) to perform the method described in the embodiments of the present application.
It should be noted that, in the embodiment of the apparatus, each unit and module included are only divided according to the functional logic, but not limited to the above-mentioned division, so long as the corresponding function can be implemented; in addition, the specific names of the functional units are also only for distinguishing from each other, and are not used to limit the protection scope of the present application.
Note that the above is only a preferred embodiment of the present application and the technical principle applied. Those skilled in the art will appreciate that the present application is not limited to the particular embodiments described herein, but is capable of numerous obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the present application. Therefore, while the present application has been described in connection with the above embodiments, the present application is not limited to the above embodiments, but may include many other equivalent embodiments without departing from the spirit of the present application, the scope of which is defined by the scope of the appended claims.

Claims (9)

1. A method of target object identification, the method comprising:
receiving image fragments sent by X-ray equipment, wherein the number of image scanning lines of the image fragments is less than a first preset threshold value;
Storing the image fragments in a preset buffer area, sending the image fragments to target image judging equipment, and displaying the image fragments by the target image judging equipment;
when the number of the image fragments stored in the preset buffer zone reaches a second preset threshold value, splicing the stored image fragments into a target image;
performing target object identification on the target image;
the sending the image fragments to the target image judging device comprises the following steps:
determining target graph judging equipment from a plurality of candidate graph judging equipment according to a load balancing rule;
and sending the image fragments to the target image judging device, wherein the image fragments belonging to the same target object identified in the preset buffer zone are respectively distributed to the same target image judging device.
2. The method according to claim 1, wherein the method further comprises:
acquiring a recognition result of the target object recognition;
and sending the identification result to the target graph judging equipment, and displaying the identification result in the displayed image by the target graph judging equipment.
3. The method according to claim 2, wherein the method further comprises:
When the target object is judged to be a designated alarm object according to the identification result, generating an alarm mark;
and sending the alarm mark to the target graph judging equipment, displaying the alarm mark in the displayed image by the target graph judging equipment, and sending an alarm signal.
4. The method of claim 1, wherein said performing object recognition on said object image comprises:
judging whether the target image is a blank image or not;
if the target image is not a blank image, detecting a connected domain of the target image to obtain one or more target connected domains;
dividing the target image into a set of image patches according to the one or more target connected domains;
performing interference elimination treatment on each image chip in the image chip set respectively;
and respectively inputting each image chip subjected to interference elimination into a pre-generated target detection model, and obtaining a detection result of the target detection model on a target object output by each image chip.
5. The method of claim 4, wherein said performing object recognition on said object image further comprises:
And judging whether each target connected domain has missed detection according to the detection result.
6. The method according to claim 1 or 4 or 5, wherein after said target object recognition of said target image, the method further comprises:
judging whether the detected target object is a complete target object or an incomplete target object;
when the target image is judged to contain an incomplete target object, determining a left boundary of the incomplete target object;
according to the left boundary of the incomplete target object, tracing back to the left an image scanning line of a third preset threshold value to obtain left boundary information of the incomplete target object;
taking the image scanning lines to the right of the left boundary information, used as the starting boundary, as the image fragment of the incomplete target object;
and distributing the image fragments of the incomplete target object to next determined target image judging equipment.
7. An apparatus for target object recognition, the apparatus comprising:
the image segmentation receiving module is used for receiving image segments sent by the X-ray equipment, and the number of image scanning lines of the image segments is less than a first preset threshold value;
The image segmentation sending module is used for storing the image segmentation in a preset buffer area, sending the image segmentation to target image judgment equipment, and displaying the image segmentation by the target image judgment equipment;
the image stitching module is used for stitching the stored image fragments into a target image when the number of the image fragments stored in the preset buffer zone reaches a second preset threshold value;
the image recognition module is used for recognizing the target object of the target image;
the image fragment sending module is specifically used for determining target graph judging equipment from a plurality of candidate graph judging equipment according to a load balancing rule;
and sending the image fragments to the target image judging device, wherein the image fragments belonging to the same target object identified in the preset buffer zone are respectively distributed to the same target image judging device.
8. A security inspection device, characterized in that the security inspection device comprises:
one or more processors;
storage means for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-6.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the method according to any of claims 1-6.
CN202110501793.5A 2021-05-08 2021-05-08 Target object identification method and device Active CN113139950B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110501793.5A CN113139950B (en) 2021-05-08 2021-05-08 Target object identification method and device
PCT/CN2021/134341 WO2022237135A1 (en) 2021-05-08 2021-11-30 Target object recognition method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110501793.5A CN113139950B (en) 2021-05-08 2021-05-08 Target object identification method and device

Publications (2)

Publication Number Publication Date
CN113139950A CN113139950A (en) 2021-07-20
CN113139950B true CN113139950B (en) 2024-04-16

Family

ID=76816692

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110501793.5A Active CN113139950B (en) 2021-05-08 2021-05-08 Target object identification method and device

Country Status (2)

Country Link
CN (1) CN113139950B (en)
WO (1) WO2022237135A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113139950B (en) * 2021-05-08 2024-04-16 佳都科技集团股份有限公司 Target object identification method and device
CN114693529B (en) * 2022-04-20 2023-03-14 北京理工大学重庆创新中心 Image splicing method, device and equipment and storage medium
CN115457398B (en) * 2022-11-14 2023-04-07 成都智元汇信息技术股份有限公司 Centralized graph judging client based on dual visual angles and display method

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1476811A (en) * 2002-07-29 2004-02-25 Kabushiki Kaisha Toshiba X-ray image processing device and method
JP2005084157A (en) * 2003-09-05 2005-03-31 Nec Corp Device and method for character recognition, character emphasizing method, display device, and mobile equipment
CN1910618A (en) * 2004-01-15 2007-02-07 Koninklijke Philips Electronics N.V. Stochastic analysis of cardiac function
CN104345350A (en) * 2013-07-23 2015-02-11 Tsinghua University Human body security inspection method and human body security inspection system
CN104483335A (en) * 2014-12-29 2015-04-01 Nuctech Co Ltd Image discrimination pre-warning system and method for security inspection device, and security inspection device
CN107346539A (en) * 2016-05-06 2017-11-14 Nuctech Co Ltd Method, system and device for assigning image judging tasks
CN108195855A (en) * 2017-12-27 2018-06-22 Nuctech Co Ltd Security inspection system and method thereof
CN109030524A (en) * 2018-09-04 2018-12-18 Guangzhou Metro Design and Research Institute Co Ltd Centralized image judging system for urban rail transit security inspection
CN109242762A (en) * 2018-08-27 2019-01-18 Beijing Boshi Space Technology Co Ltd Image recognition method, apparatus, storage medium and electronic device
WO2019096181A1 (en) * 2017-11-14 2019-05-23 Shenzhen Malong Technology Co Ltd Detection method, apparatus and system for security inspection, and electronic device
WO2019161578A1 (en) * 2018-02-23 2019-08-29 Huaxun Fangzhou Technology Co Ltd Parallel human body security check system
CN110400626A (en) * 2019-07-08 2019-11-01 Shanghai United Imaging Intelligence Co Ltd Image detection method, apparatus, computer device and storage medium
CN110597126A (en) * 2019-09-19 2019-12-20 Guangzhou Xinke Jiadu Technology Co Ltd Intelligent security check method, device, equipment and storage medium
CN111612089A (en) * 2020-05-29 2020-09-01 Zhejiang Dahua Technology Co Ltd Method and equipment for marking dangerous goods in security inspection machine
CN111667464A (en) * 2020-05-21 2020-09-15 Ping An Technology (Shenzhen) Co Ltd Dangerous goods three-dimensional image detection method and device, computer equipment and storage medium
CN111709422A (en) * 2020-06-22 2020-09-25 Ping An Annuity Insurance Co Ltd Image identification method and device based on neural network and computer equipment
CN112070743A (en) * 2020-09-08 2020-12-11 Beijing Shengxun Electronics Co Ltd Remote intelligent image judging system with image serialization
CN112444889A (en) * 2020-11-13 2021-03-05 Beijing Hangxing Machinery Manufacturing Co Ltd Rapid security inspection luggage remote centralized interpretation system and method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7515772B2 (en) * 2004-08-21 2009-04-07 Xerox Corp Document registration and skew detection system
US8300938B2 (en) * 2010-04-09 2012-10-30 General Electric Company Methods for segmenting objects in images
CN104135934B (en) * 2012-04-02 2016-12-28 Hitachi Ltd X-ray imaging apparatus and control method of X-ray generation apparatus
CN113139950B (en) * 2021-05-08 2024-04-16 PCI Technology Group Co Ltd Target object identification method and device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design and implementation of a target detection algorithm based on connected-component labeling; Dai Huadong; Hu Moufa; Lu Huanzhang; Wang Yang; Modern Electronics Technique; Vol. 38, No. 20; pp. 71-74 *

Also Published As

Publication number Publication date
WO2022237135A1 (en) 2022-11-17
CN113139950A (en) 2021-07-20

Similar Documents

Publication Publication Date Title
CN113139950B (en) Target object identification method and device
CN109142366B (en) Spherical shell defect detection method and device and computer readable storage medium
CN110781839A (en) Sliding window-based small and medium target identification method in large-size image
CN110298802B (en) Pavement crack detection method and device
US8660373B2 (en) PDF de-chunking and object classification
US20130120125A1 (en) Method and system for lane departure warning
CN114255436B (en) Security check image recognition system and method based on artificial intelligence
CN107622501B (en) Boundary detection method for medical image
CN113642474A (en) Hazardous area personnel monitoring method based on YOLOV5
CN113420673B (en) Garbage classification method, device, equipment and storage medium
CN116091503B (en) Method, device, equipment and medium for discriminating panel foreign matter defects
CN111598033A (en) Cargo positioning method, device and system and computer readable storage medium
CN106530292A (en) Strip steel surface defect image rapid identification method based on line scanning camera
CN113139952B (en) Image processing method and device
JP6831396B2 (en) Video monitoring device
CN113052116B (en) Ultrasonic video data processing method and device, ultrasonic equipment and storage medium
Sandhu et al. Automatic edge detection algorithm and area calculation for flame and fire images
CN115116083A (en) Method, system and storage medium for automatically identifying and correcting electric power graphics primitives of monitoring picture
CN116543327A (en) Method, device, computer equipment and storage medium for identifying work types of operators
US20060269132A1 (en) Apparatus and method for detecting white areas within windows and selectively merging the detected white areas into the enclosing window
CN114255344A (en) Security check image segmentation method and device, electronic equipment and computer readable medium
CN112668676A (en) Package security inspection image processing method, electronic device and storage medium
JP2022056219A (en) Information processor, method for processing information, and program
US20060269142A1 (en) Apparatus and method for subtyping window elements in a document windowing system
CN113706556B (en) Rapid detection and replacement method for image slice edge fragments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40052366

Country of ref document: HK

GR01 Patent grant