CN113139950A - Target object identification method and device

Info

Publication number
CN113139950A
CN113139950A (application CN202110501793.5A)
Authority
CN
China
Prior art keywords
image
target
target object
judging
fragments
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110501793.5A
Other languages
Chinese (zh)
Other versions
CN113139950B (en)
Inventor
田之进
黄振杰
李碧丹
张俊明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Huajia Software Co ltd
Guangzhou Jiadu Technology Software Development Co ltd
Guangzhou Xinke Jiadu Technology Co Ltd
PCI Technology Group Co Ltd
Original Assignee
Guangzhou Huajia Software Co ltd
Guangzhou Jiadu Technology Software Development Co ltd
Guangzhou Xinke Jiadu Technology Co Ltd
PCI Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huajia Software Co ltd, Guangzhou Jiadu Technology Software Development Co ltd, Guangzhou Xinke Jiadu Technology Co Ltd, PCI Technology Group Co Ltd filed Critical Guangzhou Huajia Software Co ltd
Priority to CN202110501793.5A priority Critical patent/CN113139950B/en
Publication of CN113139950A publication Critical patent/CN113139950A/en
Priority to PCT/CN2021/134341 priority patent/WO2022237135A1/en
Application granted granted Critical
Publication of CN113139950B publication Critical patent/CN113139950B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/32 Indexing scheme involving image mosaicing
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10116 X-ray image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The application discloses a target object identification method and device. The method includes: receiving image fragments sent by an X-ray device, where the number of image scan lines in each image fragment is less than a first preset threshold; storing the image fragments in a preset buffer while sending them to a target image judging device, which displays them; when the number of image fragments stored in the preset buffer reaches a second preset threshold, stitching the stored image fragments into a target image; and performing target object identification on the target image. In this way, the image judge can assess the image fragment by fragment without waiting for the whole image to appear, which lengthens the time available for manual image judgment and improves the efficiency and accuracy of judgment. Moreover, by the time the server finishes identifying the image, the image on the image judging device side has been fully displayed and the judge has finished assessing it, so image identification and image judgment proceed in parallel.

Description

Target object identification method and device
Technical Field
The embodiments of the application relate to data processing technologies, and in particular to a target object identification method and device.
Background
X-ray security inspection machines are widely used in fields such as transportation and logistics, and rapid social development places ever higher demands on the speed and accuracy of security inspection.
When conventional X-ray inspection equipment used in the rail transit field performs remote centralized image judgment, a typical X-ray machine scans a package and the package image appears at the on-site image judging end. If an intelligent image recognizer is equipped, its video capture card grabs the image from the video interface of the on-site image judging end and passes it to the recognizer's AI module, which pushes the whole picture to the remote image judging end only after intelligent recognition is complete. Because X-ray machine scanning, AI recognition by the intelligent image recognizer (if equipped), the on-site image judging end (if equipped), and the remote image judging end operate in series, the judge at the remote end perceives an obvious visual stutter. Before a package image reaches the judging end, the judge sees either a blank screen or the previous picture and cannot assess the scanned package; once the current package image finally arrives and is displayed, too little time is left, which aggravates the time pressure of manual image judgment.
Disclosure of Invention
The application provides a target object identification method and device, to solve the prior-art problems that the image judge visually perceives a pause when a package image appears and that the time pressure of manual image judgment is aggravated.
In a first aspect, an embodiment of the present application provides a method for identifying a target object, where the method includes:
receiving image fragments sent by X-ray equipment, wherein the number of image scanning lines of the image fragments is less than a first preset threshold value;
storing the image fragments in a preset buffer area, sending the image fragments to target image judging equipment, and displaying the image fragments by the target image judging equipment;
when the number of the image fragments stored in the preset buffer zone reaches a second preset threshold value, splicing the stored image fragments into a target image;
and carrying out target object identification on the target image.
Optionally, the method further comprises:
acquiring a recognition result of the target object recognition;
and sending the recognition result to the target image judging device, and displaying the recognition result in a displayed image by the target image judging device.
Optionally, the method further comprises:
when it is judged, according to the identification result, that the target object is a designated alarm object, generating an alarm mark;
and sending the alarm mark to the target image judging device, which displays the alarm mark in the displayed image and issues an alarm signal.
Optionally, the sending the image fragment to a target image judging device includes:
determining a target image judging device from a plurality of candidate image judging devices according to a load balancing rule;
and sending the image fragments to the target image judging device, where image fragments identified in the preset buffer as belonging to the same target object are all distributed to the same target image judging device.
Optionally, the performing target object recognition on the target image includes:
judging whether the target image is a blank image;
if the target image is not a blank image, performing connected domain detection on the target image to obtain one or more target connected domains;
segmenting the target image into a set of image patches according to the one or more target connected domains;
performing interference removal processing on each image patch in the image patch set respectively;
and respectively inputting each image patch subjected to interference elimination into a pre-generated target detection model, and obtaining a detection result of the target object output by the target detection model aiming at each image patch.
Optionally, the performing target object identification on the target image further includes:
and judging whether the missed detection exists in each target connected domain or not according to the detection result.
Optionally, after the performing target object recognition on the target image, the method further includes:
judging whether a detected target object is a complete target object or an incomplete target object;
when it is determined that the target image includes an incomplete target object, determining the left boundary of the incomplete target object;
backtracking leftwards by a third preset threshold of image scan lines from the left boundary of the incomplete target object to obtain left boundary information of the incomplete target object;
taking the scan lines to the right of the start boundary indicated by the left boundary information of the incomplete target object as image fragments of that target object;
and distributing the image fragments of the incomplete target object to the next determined target image judging device.
In a second aspect, an embodiment of the present application further provides an apparatus for target object identification, where the apparatus includes:
an image fragment receiving module, configured to receive image fragments sent by the X-ray device, where the number of image scan lines of each image fragment is less than a first preset threshold;
the image fragment sending module is used for storing the image fragments in a preset buffer area, sending the image fragments to target image judging equipment and displaying the image fragments by the target image judging equipment;
the image splicing module is used for splicing the stored image fragments into a target image when the number of the image fragments stored in the preset buffer zone reaches a second preset threshold value;
and the image identification module is used for carrying out target object identification on the target image.
In a third aspect, an embodiment of the present application further provides a security inspection apparatus, where the security inspection apparatus includes:
one or more processors;
a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of the first aspect described above.
In a fourth aspect, the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the method of the first aspect.
The technical solution provided by the application has the following beneficial effects:
in this embodiment, the image slices transmitted by the X-ray device are image slices with the number of image scanning lines less than a first preset threshold, and after receiving the image slices, the server stores the image slices in a preset buffer area and simultaneously sends the image slices to the target image judging device for display, so that the image judging person can judge the image according to the image slices without waiting for the whole image to appear and then judge the image, thereby increasing the time for manually judging the image and improving the efficiency and accuracy of judging the image.
In addition, when the number of image slices stored in the server's preset buffer reaches a second preset threshold, the server stitches the stored slices into a target image and performs target object identification on it. Thus, by the time the server finishes identifying the image, the image on the judging side has been fully displayed and the judge has finished assessing it, so image identification and image judgment run in parallel.
Drawings
Fig. 1 is a flowchart of an embodiment of a method for identifying a target object according to an embodiment of the present application;
fig. 2 is a flowchart of an embodiment of a method for identifying a target object according to a second embodiment of the present application;
fig. 3 is a block diagram of a structure of an embodiment of an apparatus for identifying a target object according to a third embodiment of the present application;
fig. 4 is a schematic structural diagram of a security inspection apparatus provided in the fourth embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some of the structures related to the present application are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of an embodiment of a method for identifying a target object according to an embodiment of the present application, where the embodiment may be applied to a server, and in an application scenario, the server may include a security inspection device in a security inspection scenario, and specifically may include the following steps:
step 110, receiving image slices sent by an X-ray device, where the number of image scanning lines of the image slices is less than a first preset threshold.
In one embodiment, this method may be applied to a security inspection scene, and the X-ray device may include the X-ray inspection device in a security inspection machine. In implementation, at a low level of its software stack, the X-ray device can send the X-ray image scan lines from its graphics-card buffer directly to a network interface for transmission. Further, to save CPU resources and improve processing efficiency, the image scan lines are not sent line by line; instead, after forming image patches (i.e., image slices), they are packed and sent as macro frames through the network card.
An image slice serves as the transmission and processing unit of this embodiment; as an example, the number of image scan lines per slice may be set to be less than the first preset threshold. The first preset threshold can be set according to actual service requirements, which this embodiment does not limit; of course, it should be neither too large, which would weaken the synchronization effect described below, nor too small, which would hurt processing efficiency. For example, the first preset threshold may be set to 50 (assuming a conveyor speed of 0.6 m/s and an X-ray detector scanning frequency of 750 Hz, 50 image scan lines correspond to 1/15 second of radiographic image, i.e., about a 4 cm length of parcel).
In one implementation, the server may issue a profile of scan lines to the X-ray device, where the profile may contain a configuration of the number of image scan lines per image slice, e.g., the profile may include records that combine image slices according to less than 50 scan lines. The X-ray device can transmit less than 50X-ray image scan lines as one image slice according to the profile.
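By way of illustration only (the embodiment prescribes no code), the packing of scan lines into serial-numbered slices on the X-ray device side might be sketched in Python as follows; the function name, the dictionary layout, and the serial-number field are assumptions:

```python
def pack_scan_lines(scan_lines, max_lines=49):
    """Pack raw X-ray scan lines into image slices, each holding fewer
    scan lines than the first preset threshold (here max_lines + 1 = 50),
    tagging every slice with a serial number for later stitching."""
    slices = []
    for serial, start in enumerate(range(0, len(scan_lines), max_lines)):
        slices.append({
            "serial": serial,                         # used by the server to stitch
            "lines": scan_lines[start:start + max_lines],
        })
    return slices

# e.g. 120 scan lines are packed as slices of 49 + 49 + 22 lines
demo = pack_scan_lines(list(range(120)))
```

In a real device the slices would then be framed and sent through the network card rather than kept in memory.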
And 120, storing the image fragment in a preset buffer area, sending the image fragment to a target image judging device, and displaying the image fragment by the target image judging device.
In this step, a preset buffer for storing image slices may be created in advance in the server; each time the server receives an image slice, it first stores the slice in the preset buffer. The server then reads the slice from the buffer and sends it to the target image judging device, which displays received slices in real time in scrolling form, so that the image judge can begin assessing the image from the slices, increasing the time available for manual judgment. Moreover, to the judge, the slices unfold as a smooth scroll with no noticeable visual stutter.
In an embodiment, the step of sending the image slice to the target image judging device in step 120 may further include the following steps:
determining a target image judging device from a plurality of candidate image judging devices according to a load balancing rule; and sending the image slice to the target image judging device.
This embodiment does not limit the specific load balancing rule; when selecting the target image judging device according to the rule, factors such as each candidate device's load, the judge's experience score, and historical task execution may be considered together. For example, in a security check scenario, the working state of each candidate device may be obtained, the states including idle and busy. If more than one candidate device is idle, the experience scores of the judges at the idle devices may be compared and the device whose judge has the higher score selected as the target. If exactly one candidate device is idle, it is taken directly as the target. If no candidate device is idle, the experience scores of the judges at all candidate devices may be compared and the device whose judge has the higher score selected. If the judges' experience scores are all similar, the device with fewer pending tasks may be selected as the target image judging device.
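A minimal sketch of such a selection rule, assuming hypothetical device records with `state`, `experience`, and `pending` fields (none of these names come from the embodiment):

```python
def pick_judging_device(devices):
    """Select a target image-judging device: prefer idle devices; among
    the candidates, prefer the judge with the higher experience score;
    break ties by the smaller backlog of pending tasks."""
    idle = [d for d in devices if d["state"] == "idle"]
    pool = idle if idle else devices       # fall back to all devices when none is idle
    return max(pool, key=lambda d: (d["experience"], -d["pending"]))

devices = [
    {"name": "A", "state": "busy", "experience": 9, "pending": 4},
    {"name": "B", "state": "idle", "experience": 5, "pending": 1},
    {"name": "C", "state": "idle", "experience": 7, "pending": 2},
]
```

With these sample records, the idle device whose judge has the higher experience score ("C") is selected; if all three were busy, the rule would fall back to the most experienced judge overall.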
In order to ensure complete display of objects (such as packages in a security inspection machine), image slices belonging to the same target object and identified in a preset buffer zone are all distributed to the same target image judging device.
By selecting the target image judging device from the candidates according to a load balancing rule and distributing the image slices to it, this embodiment avoids piling multiple judging tasks onto the same device and improves judging efficiency. For example, different parcel images are automatically assigned to several different judges for simultaneous assessment, which improves the efficiency and accuracy of judgment compared with one judge handling several parcel images.
And step 130, when the number of the image fragments stored in the preset buffer zone reaches a second preset threshold value, splicing the stored image fragments into a target image.
In practice, since the time required for performing image recognition in the subsequent step 140 is substantially fixed, in order to reduce the workload of image recognition, in this embodiment, when the number of image slices in the buffer reaches the second preset threshold, the stored image slices may be spliced into the target image, and then image recognition is performed based on the target image.
In one implementation, a counter may be set in the preset buffer, and when the counter displays that the number of the image slices stored in the preset buffer reaches a second preset threshold, the image stitching operation is triggered.
Specifically, when the X-ray device generates image slices, slice serial numbers may be added to the image slices, and on the server side, the images may be spliced according to the slice serial numbers of the image slices. The embodiment does not limit the specific image stitching algorithm.
It should be noted that the second preset threshold may be determined according to an actual service requirement, which is not limited in this embodiment, for example, the second preset threshold may be set to 20.
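The counter-triggered stitching described in the steps above might be sketched as follows; the `SliceBuffer` class, its callback, and the slice dictionary layout are illustrative assumptions, not part of the embodiment:

```python
class SliceBuffer:
    """Buffer that counts incoming image slices and stitches them into
    one target image once the count reaches the second preset threshold."""

    def __init__(self, threshold=20, on_image=None):
        self.threshold = threshold
        self.slices = []
        self.on_image = on_image           # callback receiving the stitched image

    def add(self, slice_):
        self.slices.append(slice_)         # the counter is len(self.slices)
        if len(self.slices) >= self.threshold:
            # stitch in scan order using the serial number set by the X-ray device
            ordered = sorted(self.slices, key=lambda s: s["serial"])
            image = [line for s in ordered for line in s["lines"]]
            self.slices.clear()
            if self.on_image:
                self.on_image(image)
```

A production server would hand the stitched image to the recognition step asynchronously so that buffering of new slices is never blocked.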
And step 140, performing target object identification on the target image.
For example, in a security check scenario, the target object may include a package (e.g., a box or the like) in a security check conveyor, and by identifying the package in the target image, it may be determined whether the target image contains a complete package image.
In one implementation, target object recognition may be performed by a combination of traditional visual algorithms and deep learning algorithms. Step 140 may further include the steps of:
and step 140-1, judging whether the target image is a blank image.
In one embodiment, the server may determine whether the target image is a blank image by:
acquiring gray information of a target image; and if the gray information meets the specified gray condition, judging that the target image is a blank image.
Specifically, the target image may be subjected to a graying process to generate a grayscale image, and a grayscale histogram of the grayscale image may be obtained. If the gray histogram meets the specified gray condition, the target image can be judged to be a blank image.
In one example, the specified grayscale conditions may include: the number of inflection points of the envelope of the gray histogram is one, that is, if the number of inflection points of the envelope of the gray histogram of the target image is one, it is determined that the target image is a blank image.
In other examples, the specified grayscale conditions may include: and (3) judging that the target image is a blank image if the gray histogram of the current target image is close to the full-scale gray histogram. The full scale means a response when there is no target object in the acquisition environment where the image acquisition device is located, for example, in a security check scene, the full scale means a response when an X-ray directly hits a detector when there is no object, and when the gray histogram of the target image is close to the gray histogram of the image generated by the full scale response, it can be determined that the target image is a blank image.
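The embodiment does not fix a metric for "close to" the full-scale histogram; as one hedged sketch, an L1 distance between normalized grayscale histograms could serve (function names and the tolerance value are assumptions):

```python
def gray_histogram(pixels, bins=256):
    """Normalized grayscale histogram of a flat list of 0-255 pixel values."""
    hist = [0.0] * bins
    for p in pixels:
        hist[p] += 1.0
    n = float(len(pixels))
    return [h / n for h in hist]

def is_blank(pixels, full_scale_pixels, tol=0.1):
    """Judge the image blank when its histogram is close (small L1 distance)
    to the histogram of the full-scale, no-object detector response."""
    h1 = gray_histogram(pixels)
    h2 = gray_histogram(full_scale_pixels)
    return sum(abs(a - b) for a, b in zip(h1, h2)) < tol
```

An image dominated by near-full-scale pixels would pass this check, while an image containing a dark object region would not.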
In another embodiment, an image without a target object but with noise (such as various interferents) may be used as a blank image, and the server may determine whether the target image is a blank image by:
performing connected domain detection on the target image; if one or more connected domains are detected, obtaining their areas and filtering out those whose area is smaller than a certain threshold; if no connected domain remains, judging the target image to be a blank image. A connected component (also called a connected region) refers to an image area (blob) formed by adjacent foreground pixels having the same pixel value.
Of course, besides the above two ways, a person skilled in the art may also recognize the blank image in other suitable ways according to the actual scene, which is not limited in this embodiment.
If the target image is a blank image, the blank image can be directly discarded, and resource waste caused by subsequent identification of the blank image is avoided. If the target image is not a blank image, then execution may continue at step 140-2.
And 140-2, if the target image is not a blank image, performing connected domain detection on the target image to obtain one or more target connected domains.
In this step, when the server determines that the target image is not blank, further detection is performed, which may include connected domain detection, thereby obtaining one or more target connected domains in the target image.
In one embodiment, step 140-2 may further include the steps of:
and 140-2-1, carrying out graying and binarization processing on the target image to obtain a binary image.
This step removes background interference from the target image by graying and binarizing it, obtaining a binary image I_bin.
And 140-2-2, carrying out denoising treatment on the binary image.
By denoising the binary image, this step separates objects that are not tightly stuck together (in a security inspection scene, e.g., bags on the conveyor belt); the denoised binary image may be denoted I_denoise.
In one example, the denoising process may include a median filtering process, a morphological erosion process, and the like.
And 140-2-3, performing connected domain analysis on the denoised binary image to obtain one or more candidate connected domains.
In this step, the server may apply a connected component analysis (connected component labeling) algorithm to the denoised binary image I_denoise, finding and marking each connected region in it, finally obtaining one or more candidate connected domains.
It should be noted that this embodiment does not limit the specific connected component analysis algorithm; for example, it may be the Two-Pass (two-pass scanning) method or the Seed-Filling (seed filling) method, as long as one or more candidate connected domains are finally obtained.
And 140-2-4, filtering out candidate connected domains with the areas smaller than the set area threshold value, and taking the remaining connected domains as target connected domains.
In this step, after obtaining one or more candidate connected domains, the area of each may be calculated; in one implementation, the area can be computed with fabs(cvContourArea(c, CV_WHOLE_SEQ)) from OpenCV. Of course, the areas may also be computed in other ways, for example by counting each candidate connected domain's pixels as its area, or with MATLAB functions such as bwarea(bw).
After the area of each candidate connected domain is obtained, the area of each candidate connected domain can be compared with a set area threshold value T, then the candidate connected domains with the areas smaller than T are filtered, and finally the remaining connected domains are used as target connected domains.
There may be one or more target connected domains; for example, a plurality of target connected domains may be described as: L_coarse = {(x_1, y_1, w_1, h_1), …, (x_i, y_i, w_i, h_i)}, where i denotes the i-th target connected domain, (x_i, y_i) is the upper-left corner coordinate of the i-th target connected domain, and w_i and h_i are its width and height, respectively.
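The labeling-plus-area-filter pipeline of steps 140-2-3 and 140-2-4 can be sketched with a breadth-first seed-filling variant in pure Python; this is only one of the admissible algorithms, and the function name and return format are assumptions:

```python
from collections import deque

def connected_components(binary, min_area=0):
    """4-connected component labeling on a binary image (list of rows,
    1 = foreground). Returns bounding boxes (x, y, w, h) of the
    components whose area is at least min_area, i.e. the target
    connected domains after the small-area filter."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y0 in range(h):
        for x0 in range(w):
            if binary[y0][x0] != 1 or seen[y0][x0]:
                continue
            # BFS over one component, tracking its area and bounding box
            queue = deque([(y0, x0)])
            seen[y0][x0] = True
            area, xs, ys = 0, [], []
            while queue:
                y, x = queue.popleft()
                area += 1
                xs.append(x)
                ys.append(y)
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] == 1 and not seen[ny][nx]:
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            if area >= min_area:
                boxes.append((min(xs), min(ys),
                              max(xs) - min(xs) + 1, max(ys) - min(ys) + 1))
    return boxes
```

In practice an optimized library routine (e.g. a two-pass labeler) would replace this sketch on full-size X-ray images.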
Step 140-3, segmenting the target image into a set of image patches according to the one or more target connected components.
In this step, after one or more target connected domains are marked, the target image may be segmented according to them to obtain corresponding image patches, which together form an image patch set. The set may be denoted I_split = {I_1, …, I_i}, where I_i is an image patch.
And 140-4, respectively carrying out interference removal processing on each image patch in the image patch set.
In this embodiment, by performing interference removal processing on each image patch, it can be ensured that the number of connected domains corresponding to each image patch is not more than 1, thereby improving the accuracy of subsequent target object detection.
For the image patch set I_split = {I_1, …, I_i}, after interference removal is performed on each image patch, a de-interfered image patch set I_d = {I_d1, …, I_di} is obtained.
In one embodiment, step 140-4 may further include the steps of:
performing connected domain detection on each image patch respectively; if the current image patch contains exactly one connected domain, leaving it unprocessed and using it directly as the de-interfered image patch; and if the current image patch contains more than one connected domain, keeping the largest-area connected domain in the patch and setting the pixels of the other connected domains to background pixels to obtain the de-interfered image patch.
Specifically, connected domain detection may be performed on each image patch with the method of step 140-2 to obtain each patch's connected domain positions. The number of connected domains in each patch is then analyzed: if a patch contains exactly one connected domain, no de-interference is applied and the patch is used directly as the de-interfered patch, i.e., I_di = I_i, where I_di is the de-interfered image patch. If a patch contains more than one connected domain, the area of each domain is calculated, the largest-area domain in the patch is kept, and the pixels of the other domains are set to background pixels; that is, I_di is the image obtained from I_i by keeping its largest-area connected domain and setting the pixel values in the other connected domains to background values.
Here the background pixel value may be set according to the background of the image patch; for example, if the background of the image patch is white, the background pixel value may be (255, 255, 255).
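The de-interference rule above (keep the largest connected domain, erase the rest) can be sketched in pure Python; the list-of-pixel-rows representation and 4-connectivity are illustrative assumptions, not taken from the patent:

```python
from collections import deque

def remove_interference(patch, background=255):
    """Keep only the largest connected domain in an image patch.

    `patch` is a 2D list of pixel values; any pixel != `background` is
    foreground. A patch with 0 or 1 connected domains is returned
    unchanged (Idi = Ii); otherwise every domain except the largest is
    filled with the background value.
    """
    h, w = len(patch), len(patch[0])
    seen = [[False] * w for _ in range(h)]
    domains = []  # each domain is a list of (row, col) pixels

    for r in range(h):
        for c in range(w):
            if patch[r][c] != background and not seen[r][c]:
                # flood-fill one connected domain (4-connectivity)
                comp, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and patch[ny][nx] != background
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                domains.append(comp)

    if len(domains) <= 1:
        return patch  # nothing to remove
    largest = max(domains, key=len)
    for comp in domains:
        if comp is not largest:
            for y, x in comp:
                patch[y][x] = background  # erase the smaller domains
    return patch
```

The area of a domain is taken as its pixel count, which matches the "largest area" criterion for the pixel-level connected domains described above.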
Step 140-5, respectively inputting the de-interfered image patches into a pre-generated target detection model, and obtaining the detection results of the target object that the target detection model outputs for the image patches.
In this step, after each image patch has been de-interfered to obtain the de-interfered image patch Idi, each Idi may be input into a pre-generated target detection model, and the model detects the target object in each Idi and outputs a corresponding detection result.
In one embodiment, the target detection model may be a deep learning based machine model, for example, the target detection model may be the YOLOv3 model.
In one implementation, assuming that the YOLOv3 model is used for baggage detection in a security inspection scenario, the YOLOv3 model may be trained as follows:
segmenting the X-ray long image with a vertical projection method to obtain a data set D, labeling D accordingly, and dividing it into a training set, a test set and a validation set; re-clustering the coordinates of 9 anchors on D with the k-means clustering algorithm to replace the default anchors of the YOLOv3 model; and optimizing the model parameters with stochastic gradient descent until the loss function converges.
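The anchor re-clustering step can be illustrated with a minimal k-means over labelled (w, h) box sizes, using the 1 − IoU distance that is conventional for YOLO anchor selection; the area-spread initialisation is a choice of this sketch, not prescribed by the patent:

```python
def iou_wh(box, anchor):
    """IoU of two (w, h) boxes aligned at a common top-left corner."""
    inter = min(box[0], anchor[0]) * min(box[1], anchor[1])
    union = box[0] * box[1] + anchor[0] * anchor[1] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iters=100):
    """Re-cluster (w, h) box sizes from the data set into k anchors.

    Boxes are assigned to the anchor of highest IoU (i.e. smallest
    1 - IoU distance) and each anchor is updated to its cluster mean.
    Returns the anchors sorted by area, smallest first.
    """
    # deterministic initialisation: pick k boxes spread across the
    # range of box areas (an implementation choice of this sketch)
    boxes_sorted = sorted(boxes, key=lambda b: b[0] * b[1])
    step = max(1, len(boxes_sorted) // k)
    anchors = [boxes_sorted[min(i * step, len(boxes_sorted) - 1)]
               for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for box in boxes:
            best = max(range(k), key=lambda j: iou_wh(box, anchors[j]))
            clusters[best].append(box)
        new = [(sum(b[0] for b in c) / len(c),
                sum(b[1] for b in c) / len(c)) if c else anchors[j]
               for j, c in enumerate(clusters)]
        if new == anchors:
            break  # converged
        anchors = new
    return sorted(anchors, key=lambda a: a[0] * a[1])
```

In practice the resulting 9 (w, h) pairs would replace the default anchors in the YOLOv3 configuration before training.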
For example, in a security inspection scene, the target object may be a bag on the security inspection conveyor belt, and the detection result output by the target detection model may include the position information of the bags in each image patch, which may be expressed as Lfine = {B1, …, Bi} = {{(x11, y11, w11, h11), …}, …, {…, (xij, yij, wij, hij)}}, where Bi denotes the position information of the bags in the i-th image patch, and the subscript ij denotes the j-th bag detected in the i-th image patch.
In one embodiment, after the fine detection of the target object is performed through step 140-5, step 140 may further include the steps of:
and 140-6, judging whether the missed detection exists in each target connected domain or not according to the detection result.
In actual processing, missed detections may occur within a target connected domain. The number of missed detections is therefore reduced by analyzing each target connected domain for them.
In one embodiment, the detection result may include a location of the target object; step 140-6 may further include the steps of:
judging, according to the position of each target object, whether a target object has been detected in the current target connected domain; if no target object has been detected in the current target connected domain, judging that missed detection exists in the current target connected domain and taking the current target connected domain as a target object; if a target object has been detected in the current target connected domain, locating the image patch corresponding to the current target connected domain; setting the pixel values corresponding to the detected target objects in that image patch to background pixel values and then performing connected domain detection on the patch; and if a connected domain is detected, judging that missed detection exists in the current target connected domain and taking the detected connected domain as a target object.
Specifically, if the position information of the target connected domain is Li = (xi, yi, wi, hi) and the position information of the detected target objects is Bi = {(xi1, yi1, wi1, hi1), …, (xij, yij, wij, hij)}, the position (xij, yij, wij, hij) of each target object may be matched against Li = (xi, yi, wi, hi) to determine how many target objects the current target connected domain contains.
Since a target connected domain necessarily contains a target object, if the current target connected domain contains 0 target objects — that is, no target object's position information overlaps with the position information of the current target connected domain — the target connected domain can be judged a missed-detection connected domain and added as a target object to the target object set, that is, Li is added to Lresult (the set of target objects).
If the current target connected domain contains more than 0 target objects — that is, the position information of more than 0 target objects overlaps with the position information of the current target connected domain — the position information Bi of each target object contained in the current target connected domain may first be added to Lresult. Then, in the corresponding image patch Ii, the pixel values in all regions corresponding to Bi are set to background pixel values (for example, (255, 255, 255)), and coarse detection is performed on the resulting image; if the number of connected domains obtained by the coarse detection is not 0, those connected domains are missed-detection connected domains and are added to Lresult as target objects.
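The missed-detection analysis can be sketched as follows; `detect_domains` stands in for the coarse connected-domain detector of step 140-2, and the axis-aligned overlap test is an assumption about how the position matching is carried out:

```python
def overlaps(a, b):
    """Axis-aligned overlap test for two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def check_missed(domain_box, detections, patch, background, detect_domains):
    """Return the extra target boxes missed inside one target connected domain.

    If no detection overlaps the domain, the whole domain box itself is
    treated as a missed target (Li goes into Lresult). Otherwise the
    detected regions are filled with the background value and the coarse
    detector `detect_domains` (supplied by the caller) is re-run on the
    patch; any connected domain it still finds was missed.
    """
    hits = [b for b in detections if overlaps(b, domain_box)]
    if not hits:
        return [domain_box]          # the domain itself was missed
    for x, y, w, h in hits:
        for r in range(y, y + h):    # erase each detected target
            for c in range(x, x + w):
                patch[r][c] = background
    return detect_domains(patch)     # leftover domains were missed
```

The caller would append the returned boxes to Lresult alongside the detections Bi.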
Step 140-7, judging whether each detected target object is a complete target object or an incomplete target object.
In this embodiment, the detection results corresponding to the target connected domains are combined to form the detection result of the whole target image. Step 140-7 analyzes the integrity of that result, mainly determining whether each detected target object is complete or incomplete in the target image. For example, in an X-ray security inspection scene the conveyor belt is always moving, and a target image may be cut out before a bag has been completely scanned, so an integrity analysis of the bag is required.
In one embodiment, step 140-7 may further include the steps of:
acquiring the abscissa of the upper left corner and the width of the target object; judging, according to the upper-left-corner abscissa and the width, whether the target object is located at the right edge of the target image; if yes, judging that the target object is an incomplete target object; if not, judging that the target object is a complete target object.
For example, assume that the abscissa of the upper left corner of the target object is xi, its width is wi, and the width of the target image is W. If xi + wi + a > W, where a is a positive constant that may be determined according to actual requirements (for example, a = 5 from empirical values), the target object is located at the right edge of the target image and is an incomplete target object (an incomplete target object generally lies at an edge). Conversely, if xi + wi + a < W, the target object is not at the right edge of the target image and is a complete target object.
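As a minimal illustration, the edge test reduces to a one-line comparison; treating the boundary case xi + wi + a = W as complete is a choice of this sketch, since the text only specifies the strict inequalities:

```python
def is_complete(x_left, width, image_width, a=5):
    """Return True if a detected target object is complete.

    The object counts as incomplete when its right edge comes within
    `a` pixels of the right edge of the target image, i.e. when
    x_left + width + a > image_width. The default a = 5 follows the
    empirical value mentioned above; equality is treated as complete.
    """
    return x_left + width + a <= image_width
```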
Step 140-8, when the target image is judged to contain an incomplete target object, determining the left boundary of the incomplete target object; tracing back a third preset threshold of image scan lines to the left from that left boundary to obtain the left boundary information of the incomplete target object; taking the scan lines to the right of the left boundary information, used as the starting boundary, as the image slice of the incomplete target object; and distributing the image slice of the incomplete target object to the next determined target image judging device.
Specifically, if the target image contains an incomplete target object, the position information of the incomplete target object may be obtained, and the left boundary of the incomplete target object may be extracted from the position information of the incomplete target object.
After the left boundary of the incomplete target object is obtained, to guard against errors, the left boundary may be used as the starting boundary and the image scan line reached by tracing back a third preset threshold to the left taken as the left boundary information of the incomplete target object. For example, 20 image scan lines may be counted leftwards from the left boundary, and the position of the 20th scan line used as the left boundary information.
After the left boundary information of the incomplete target object is obtained, the image slice of the incomplete target object can be formed from the image scan lines to its right, with the left boundary information as the starting position. That image slice is then distributed to the next determined target image judging device, which meanwhile continues to receive from the server the image slices sent by the X-ray device, so that the integrity of the target object is ensured.
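Modelling an image as a list of rows whose columns are scan lines, the slice handed to the next judging device can be sketched as below; the 20-line backtrack follows the example above, and clamping at column 0 is an added safeguard of this sketch:

```python
def slice_incomplete(image, left_boundary, backtrack=20):
    """Cut out the image slice forwarded for an incomplete target object.

    Trace back `backtrack` scan lines (the third preset threshold) to
    the left of the object's left boundary, then keep every scan line
    from that position to the right edge of the image.
    """
    start = max(0, left_boundary - backtrack)  # clamp at the image edge
    return [row[start:] for row in image]
```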
In this embodiment, the image slices transmitted by the X-ray device are image slices whose number of image scan lines is less than a first preset threshold. After receiving an image slice, the server stores it in a preset buffer area and simultaneously sends it to the target image judging device for display, so that the image judge can begin judging from the image slices without waiting for the whole image to appear, which leaves more time for manual judgment and improves the efficiency and accuracy of image judging.
In addition, when the number of image slices stored in the preset buffer area of the server reaches a second preset threshold, the server can splice the stored image slices into a target image and perform target object recognition on it, so that by the time the image on the judging side is completely displayed and the image judge finishes judging, the server has finished recognizing the image, thereby realizing synchronous execution of image recognition and image judgment.
Example two
Fig. 2 is a flowchart of an embodiment of a method for identifying a target object according to a second embodiment of the present application, where the embodiment may be applied to a server, and specifically includes the following steps:
step 210, receiving image slices sent by an X-ray device, where the number of image scanning lines of the image slices is less than a first preset threshold.
Step 220, storing the image fragment in a preset buffer area, sending the image fragment to a target image judging device, and displaying the image fragment by the target image judging device.
Step 230, when the number of the image fragments stored in the preset buffer area reaches a second preset threshold value, splicing the stored image fragments into a target image.
Step 240, performing target object identification on the target image, and acquiring an identification result of the target object identification.
Step 250, sending the recognition result to the target image judging device, and displaying the recognition result in a displayed image by the target image judging device.
For example, the recognition result of target object recognition on the target image may include the position information of the target object. After obtaining the recognition result, the server may send it to the target image judging device, and the judging device displays it in the currently displayed image. For example, after the target image judging device obtains the position information of a parcel, it may display that information in the current parcel image; in doing so, it may frame the position boundary of the parcel according to the position information and display the boundary.
And step 260, generating an alarm mark when the target object is judged to be the designated alarm object according to the identification result.
In one implementation, the recognition result may further include an object type of the target object. The server may be preset with an alarm object list, which may record the types of various alarm objects and the corresponding characteristic information. When the server judges that the object type of the target object belongs to the types of the alarm objects recorded in the alarm object list, the target object can be judged to be the designated alarm object. Otherwise, when the server determines that the object type of the target object does not belong to the types of the alarm objects recorded in the alarm object list, the server may determine that the target object is not the designated alarm object.
If the server determines that the target object is the designated alarm object, a corresponding alarm tag can be generated according to the object type of the target object. In one implementation, the alarm object list may further record an alarm flag corresponding to the type of each alarm object, and the server may directly obtain the alarm flag corresponding to the object type of the target object from the alarm object list.
Step 270, sending the alarm mark to the target graph judging device, displaying the alarm mark in a displayed image by the target graph judging device, and sending an alarm signal.
After the server obtains the alarm tag of the target object, the alarm tag can be sent to the target graph judging device. After the target judging device receives the alarm mark, the alarm mark can be displayed in the displayed image. In other embodiments, the target graph determining apparatus may further send an alarm signal according to the alarm flag, where the alarm signal may be a highlighted alarm flag, an alarm sound, or the like, and this embodiment does not limit this.
In this embodiment, the server may synchronize the recognition result of the image recognition, the alarm recognition result, and the like to the target graph judging device, thereby implementing synchronization between the graph judgment and the graph recognition.
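The buffer-and-stitch flow of steps 210-240 can be sketched as a small server object; the callbacks stand in for the judging device and the recognition model, and slices are modelled as lists of scan-line columns — all illustrative assumptions:

```python
class SliceServer:
    """Minimal sketch of the Example-two server flow.

    Each incoming slice is buffered (step 220) and forwarded for
    display at once; when the buffer reaches the second preset
    threshold, the slices are stitched into one target image by
    concatenating their scan lines (step 230) and recognition is run
    on it (step 240).
    """

    def __init__(self, threshold, display, recognize):
        self.threshold = threshold   # second preset threshold
        self.display = display       # sends a slice to the judging device
        self.recognize = recognize   # target-object recognition model
        self.buffer = []             # preset buffer area

    def receive(self, image_slice):
        self.buffer.append(image_slice)   # store in the buffer ...
        self.display(image_slice)         # ... and show immediately
        if len(self.buffer) >= self.threshold:
            # stitch: concatenate the buffered scan-line columns
            target = [col for s in self.buffer for col in s]
            self.buffer.clear()
            return self.recognize(target)
        return None                       # still accumulating slices
```

This is why judging and recognition proceed in parallel: display happens per slice, while recognition fires only on each stitched target image.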
EXAMPLE III
Fig. 3 is a block diagram of a target object recognition apparatus according to a third embodiment of the present application, where the apparatus may be located in a server, and may include the following modules:
the image fragment receiving module 310 is configured to receive an image fragment sent by an X-ray device, where the number of image scanning lines of the image fragment is less than a first preset threshold;
an image fragment sending module 320, configured to store the image fragment in a preset buffer area, send the image fragment to a target image judging device, and display the image fragment by the target image judging device;
the image splicing module 330 is configured to splice the stored image fragments into a target image when the number of the image fragments stored in the preset buffer reaches a second preset threshold;
and the image recognition module 340 is configured to perform target object recognition on the target image.
In one embodiment, the apparatus may further include the following modules:
the identification result acquisition module is used for acquiring an identification result of the target object identification;
and the identification result sending module is used for sending the identification result to the target graph judging equipment, and the target graph judging equipment displays the identification result in the displayed image.
In one embodiment, the apparatus may further include the following modules:
the alarm mark generation module is used for generating an alarm mark when the target object is judged to be the designated alarm object according to the identification result;
and the alarm mark sending module is used for sending the alarm mark to the target graph judging equipment, displaying the alarm mark in a displayed image by the target graph judging equipment and sending an alarm signal.
In an embodiment, the image fragment sending module 320 is specifically configured to:
determining target graph judging equipment from a plurality of candidate graph judging equipment according to a load balancing rule;
and sending the image fragments to the target image judging device, wherein the image fragments which belong to the same target object and are identified in the preset buffer area are all distributed to the same target image judging device.
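A plausible reading of this distribution rule is least-loaded assignment with per-object stickiness; the patent does not fix a concrete load-balancing rule, so the least-connections choice below is an assumption of this sketch:

```python
def pick_device(devices, load, owner, object_id):
    """Choose the target judging device for an image slice.

    Slices already identified as belonging to target object
    `object_id` stick to the device that received the earlier slices,
    so one device sees the whole object; otherwise the least-loaded
    candidate is chosen and recorded as the object's owner.
    """
    if object_id in owner:
        return owner[object_id]           # keep the object together
    device = min(devices, key=lambda d: load[d])  # least-connections rule
    owner[object_id] = device
    load[device] += 1
    return device
```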
In one embodiment, the image recognition module 340 may include the following sub-modules:
a blank judgment submodule for judging whether the target image is a blank image;
the connected domain detection submodule is used for detecting the connected domain of the target image to obtain one or more target connected domains if the target image is not a blank image;
the image segmentation submodule is used for segmenting the target image into an image small piece set according to the one or more target connected domains;
the interference removing processing submodule is used for respectively carrying out interference removing processing on each image small piece in the image small piece set;
and the object detection submodule is used for respectively inputting each image patch subjected to interference removal into a pre-generated target detection model and obtaining a detection result of the target object output by the target detection model aiming at each image patch.
In one embodiment, the image recognition module 340 may further include the following sub-modules:
and the missed detection analysis submodule is used for judging, according to the detection results, whether missed detection exists in each target connected domain.
In one embodiment, the apparatus may further include the following modules:
the integrity judgment module is used for judging whether the detected target object is a complete target object or an incomplete target object;
an incomplete information intercepting module, configured to: determine the left boundary of an incomplete target object when the target image is judged to contain one; trace back a third preset threshold of image scan lines to the left from that left boundary to obtain the left boundary information of the incomplete target object; take the scan lines to the right of the left boundary information, used as the starting boundary, as the image slice of the incomplete target object; and distribute the image slice of the incomplete target object to the next determined target image judging device.
The device for identifying the target object, provided by the embodiment of the application, can execute the method for identifying the target object, provided by any one of the first embodiment and the second embodiment of the application, and has corresponding functional modules and beneficial effects of the execution method.
Example four
Fig. 4 is a schematic structural diagram of a security inspection apparatus according to a fourth embodiment of the present application, where the security inspection apparatus may include a server, the server may be located inside the security inspection apparatus, may also be located outside the security inspection apparatus, may be set locally, or may be set at a remote end, and the server is in communication connection with the security inspection apparatus. As shown in fig. 4, the security check apparatus includes a processor 410, a memory 420, an input device 430, and an output device 440; the number of the processors 410 in the security inspection device may be one or more, and one processor 410 is taken as an example in fig. 4; the processor 410, the memory 420, the input device 430, and the output device 440 in the security check device may be connected by a bus or other means, and fig. 4 illustrates the connection by the bus as an example.
The memory 420 serves as a computer-readable storage medium for storing software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the above-described embodiments in the embodiments of the present application. The processor 410 executes various functional applications and data processing of the security check device by executing software programs, instructions and modules stored in the memory 420, namely, a method for identifying a target object mentioned in any of the above method embodiments.
The memory 420 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 420 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, memory 420 may further include memory located remotely from processor 410, which may be connected to devices/terminals/security devices via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input means 430 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function controls of the security device. The output device 440 may include a display device such as a display screen.
EXAMPLE five
The fifth embodiment of the present application further provides a storage medium containing computer-executable instructions, which when executed by a computer processor, are used for executing the method in the above-mentioned method embodiment.
Of course, the storage medium provided in the embodiments of the present application contains computer-executable instructions, and the computer-executable instructions are not limited to the method operations described above, and may also perform related operations in the target object identification method provided in any embodiments of the present application.
From the above description of the embodiments, it is obvious for those skilled in the art that the present application can be implemented by software and necessary general hardware, and certainly can be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, and includes several instructions for enabling an electronic device (which may be a personal computer, a server, or a network device) to execute the methods described in the embodiments of the present application.
It should be noted that, in the embodiment of the apparatus, the included units and modules are merely divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be implemented; in addition, specific names of the functional units are only used for distinguishing one functional unit from another, and are not used for limiting the protection scope of the application.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present application and the technical principles employed. It will be understood by those skilled in the art that the present application is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the application. Therefore, although the present application has been described in more detail with reference to the above embodiments, the present application is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present application, and the scope of the present application is determined by the scope of the appended claims.

Claims (10)

1. A method of target object identification, the method comprising:
receiving image fragments sent by X-ray equipment, wherein the number of image scanning lines of the image fragments is less than a first preset threshold value;
storing the image fragments in a preset buffer area, sending the image fragments to target image judging equipment, and displaying the image fragments by the target image judging equipment;
when the number of the image fragments stored in the preset buffer zone reaches a second preset threshold value, splicing the stored image fragments into a target image;
and carrying out target object identification on the target image.
2. The method of claim 1, further comprising:
acquiring a recognition result of the target object recognition;
and sending the recognition result to the target image judging device, and displaying the recognition result in a displayed image by the target image judging device.
3. The method of claim 2, further comprising:
when the target object is judged to be the designated alarm object according to the identification result, an alarm mark is generated;
and sending the alarm mark to the target graph judging equipment, displaying the alarm mark in a displayed image by the target graph judging equipment, and sending an alarm signal.
4. The method according to claim 1, 2 or 3, wherein the sending the image slice to a target graph judging device comprises:
determining target graph judging equipment from a plurality of candidate graph judging equipment according to a load balancing rule;
and sending the image fragments to the target image judging device, wherein the image fragments which belong to the same target object and are identified in the preset buffer area are all distributed to the same target image judging device.
5. The method of claim 1, wherein the performing target object recognition on the target image comprises:
judging whether the target image is a blank image;
if the target image is not a blank image, performing connected domain detection on the target image to obtain one or more target connected domains;
segmenting the target image into a set of image patches according to the one or more target connected domains;
performing interference removal processing on each image patch in the image patch set respectively;
and respectively inputting each image patch subjected to interference elimination into a pre-generated target detection model, and obtaining a detection result of the target object output by the target detection model aiming at each image patch.
6. The method of claim 5, wherein the performing target object recognition on the target image further comprises:
and judging whether the missed detection exists in each target connected domain or not according to the detection result.
7. The method of claim 1, 5 or 6, wherein after said target object recognition of said target image, said method further comprises:
judging whether the detected target object is a complete target object or an incomplete target object;
determining a left boundary of an incomplete target object when it is determined that the target image includes the incomplete target object;
according to the left boundary of the incomplete target object, backtracking an image scanning line of a third preset threshold value to the left to obtain left boundary information of the incomplete target object;
taking the scanning line of which the left boundary information of the incomplete target object is the right part of the starting boundary as an image fragment of the target object;
and distributing the image slices of the incomplete target object to the next determined target image judging device.
8. An apparatus for target object recognition, the apparatus comprising:
the X-ray equipment comprises an image fragment receiving module, a first image fragment processing module and a second image fragment processing module, wherein the image fragment receiving module is used for receiving image fragments sent by the X-ray equipment, and the number of image scanning lines of the image fragments is less than a first preset threshold value;
the image fragment sending module is used for storing the image fragments in a preset buffer area, sending the image fragments to target image judging equipment and displaying the image fragments by the target image judging equipment;
the image splicing module is used for splicing the stored image fragments into a target image when the number of the image fragments stored in the preset buffer zone reaches a second preset threshold value;
and the image identification module is used for carrying out target object identification on the target image.
9. A security inspection apparatus, characterized in that the security inspection apparatus comprises:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202110501793.5A 2021-05-08 2021-05-08 Target object identification method and device Active CN113139950B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110501793.5A CN113139950B (en) 2021-05-08 2021-05-08 Target object identification method and device
PCT/CN2021/134341 WO2022237135A1 (en) 2021-05-08 2021-11-30 Target object recognition method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110501793.5A CN113139950B (en) 2021-05-08 2021-05-08 Target object identification method and device

Publications (2)

Publication Number Publication Date
CN113139950A true CN113139950A (en) 2021-07-20
CN113139950B CN113139950B (en) 2024-04-16

Family

ID=76816692

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110501793.5A Active CN113139950B (en) 2021-05-08 2021-05-08 Target object identification method and device

Country Status (2)

Country Link
CN (1) CN113139950B (en)
WO (1) WO2022237135A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114693529A (en) * 2022-04-20 2022-07-01 北京理工大学重庆创新中心 Image splicing method, device, equipment and storage medium
WO2022237135A1 (en) * 2021-05-08 2022-11-17 佳都科技集团股份有限公司 Target object recognition method and apparatus
CN115457398A (en) * 2022-11-14 2022-12-09 成都智元汇信息技术股份有限公司 Centralized graph judging client based on dual visual angles and display method

Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1476811A (en) * 2002-07-29 2004-02-25 ��ʽ���綫֥ X-ray image processing device and method
JP2005084157A (en) * 2003-09-05 2005-03-31 Nec Corp Device and method for character recognition, character emphasizing method, display device, and mobile equipment
CN1910618A (en) * 2004-01-15 2007-02-07 皇家飞利浦电子股份有限公司 Stochastic analysis of cardiac function
US20110249883A1 (en) * 2010-04-09 2011-10-13 General Electric Company Methods for segmenting objects in images
CN104345350A (en) * 2013-07-23 2015-02-11 清华大学 Human body safety check method and human body safety check system
US20150071406A1 (en) * 2012-04-02 2015-03-12 Hitachi Medical Corporation Method for controlling x-ray image diagnosis apparatus and x-ray generation device
CN104483335A (zh) * 2014-12-29 2015-04-01 同方威视技术股份有限公司 Image-judging early-warning system and method for security inspection device, and security inspection device
CN107346539A (zh) * 2016-05-06 2017-11-14 同方威视技术股份有限公司 Method, system and device for assigning image-judging tasks
CN108195855A (en) * 2017-12-27 2018-06-22 同方威视技术股份有限公司 Safety check system and its method
CN109030524A (zh) * 2018-09-04 2018-12-18 广州地铁设计研究院有限公司 Centralized image-judging system for urban rail transit security inspection
CN109242762A (en) * 2018-08-27 2019-01-18 北京铂石空间科技有限公司 Image-recognizing method, device, storage medium and electronic equipment
WO2019096181A1 (en) * 2017-11-14 2019-05-23 深圳码隆科技有限公司 Detection method, apparatus and system for security inspection, and electronic device
WO2019161578A1 (en) * 2018-02-23 2019-08-29 华讯方舟科技有限公司 Parallel human body security check system
CN110400626A (en) * 2019-07-08 2019-11-01 上海联影智能医疗科技有限公司 Image detecting method, device, computer equipment and storage medium
CN110597126A (en) * 2019-09-19 2019-12-20 广州新科佳都科技有限公司 Intelligent security check method, device, equipment and storage medium
CN111612089A (en) * 2020-05-29 2020-09-01 浙江大华技术股份有限公司 Method and equipment for marking dangerous goods in security inspection machine
CN111667464A (en) * 2020-05-21 2020-09-15 平安科技(深圳)有限公司 Dangerous goods three-dimensional image detection method and device, computer equipment and storage medium
CN111709422A (en) * 2020-06-22 2020-09-25 平安养老保险股份有限公司 Image identification method and device based on neural network and computer equipment
CN112070743A (en) * 2020-09-08 2020-12-11 北京声迅电子股份有限公司 Remote intelligent image judging system with image serialization
CN112444889A (en) * 2020-11-13 2021-03-05 北京航星机器制造有限公司 Rapid security inspection luggage remote centralized interpretation system and method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7515772B2 (en) * 2004-08-21 2009-04-07 Xerox Corp Document registration and skew detection system
CN113139950B (en) * 2021-05-08 2024-04-16 佳都科技集团股份有限公司 Target object identification method and device

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1476811A (zh) * 2002-07-29 2004-02-25 Kabushiki Kaisha Toshiba X-ray image processing device and method
JP2005084157A (en) * 2003-09-05 2005-03-31 Nec Corp Device and method for character recognition, character emphasizing method, display device, and mobile equipment
CN1910618A (en) * 2004-01-15 2007-02-07 皇家飞利浦电子股份有限公司 Stochastic analysis of cardiac function
US20110249883A1 (en) * 2010-04-09 2011-10-13 General Electric Company Methods for segmenting objects in images
US20150071406A1 (en) * 2012-04-02 2015-03-12 Hitachi Medical Corporation Method for controlling x-ray image diagnosis apparatus and x-ray generation device
CN104345350A (en) * 2013-07-23 2015-02-11 清华大学 Human body safety check method and human body safety check system
CN104483335A (zh) * 2014-12-29 2015-04-01 同方威视技术股份有限公司 Image-judging early-warning system and method for security inspection device, and security inspection device
CN107346539A (zh) * 2016-05-06 2017-11-14 同方威视技术股份有限公司 Method, system and device for assigning image-judging tasks
WO2019096181A1 (en) * 2017-11-14 2019-05-23 深圳码隆科技有限公司 Detection method, apparatus and system for security inspection, and electronic device
CN108195855A (en) * 2017-12-27 2018-06-22 同方威视技术股份有限公司 Safety check system and its method
WO2019161578A1 (en) * 2018-02-23 2019-08-29 华讯方舟科技有限公司 Parallel human body security check system
CN109242762A (en) * 2018-08-27 2019-01-18 北京铂石空间科技有限公司 Image-recognizing method, device, storage medium and electronic equipment
CN109030524A (zh) * 2018-09-04 2018-12-18 广州地铁设计研究院有限公司 Centralized image-judging system for urban rail transit security inspection
CN110400626A (en) * 2019-07-08 2019-11-01 上海联影智能医疗科技有限公司 Image detecting method, device, computer equipment and storage medium
CN110597126A (en) * 2019-09-19 2019-12-20 广州新科佳都科技有限公司 Intelligent security check method, device, equipment and storage medium
CN111667464A (en) * 2020-05-21 2020-09-15 平安科技(深圳)有限公司 Dangerous goods three-dimensional image detection method and device, computer equipment and storage medium
CN111612089A (en) * 2020-05-29 2020-09-01 浙江大华技术股份有限公司 Method and equipment for marking dangerous goods in security inspection machine
CN111709422A (en) * 2020-06-22 2020-09-25 平安养老保险股份有限公司 Image identification method and device based on neural network and computer equipment
CN112070743A (en) * 2020-09-08 2020-12-11 北京声迅电子股份有限公司 Remote intelligent image judging system with image serialization
CN112444889A (en) * 2020-11-13 2021-03-05 北京航星机器制造有限公司 Rapid security inspection luggage remote centralized interpretation system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DAI Huadong; HU Moufa; LU Huanzhang; WANG Yang: "Design and Implementation of a Target Detection Algorithm Based on Connected Component Labeling", Modern Electronics Technique, vol. 38, no. 20, pages 71 - 74 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022237135A1 (en) * 2021-05-08 2022-11-17 佳都科技集团股份有限公司 Target object recognition method and apparatus
CN114693529A (en) * 2022-04-20 2022-07-01 北京理工大学重庆创新中心 Image splicing method, device, equipment and storage medium
CN114693529B (en) * 2022-04-20 2023-03-14 北京理工大学重庆创新中心 Image splicing method, device, equipment and storage medium
CN115457398A (en) * 2022-11-14 2022-12-09 成都智元汇信息技术股份有限公司 Centralized graph judging client based on dual visual angles and display method

Also Published As

Publication number Publication date
CN113139950B (en) 2024-04-16
WO2022237135A1 (en) 2022-11-17

Similar Documents

Publication Publication Date Title
CN113139950B (en) Target object identification method and device
CN110678901B (en) Information processing apparatus, information processing method, and computer-readable storage medium
WO2019096181A1 (en) Detection method, apparatus and system for security inspection, and electronic device
CN110781839A (en) Sliding window-based small and medium target identification method in large-size image
CN107862677A (en) The Tunnel Lining Cracks recognition methods of thresholding algorithm and system between a kind of class based on gradient
US8660373B2 (en) PDF de-chunking and object classification
US8218890B2 (en) Method and apparatus for cropping images
CN114255436B (en) Security check image recognition system and method based on artificial intelligence
CN111461133B (en) Express delivery surface single item name identification method, device, equipment and storage medium
CN113283439B (en) Intelligent counting method, device and system based on image recognition
JP6630341B2 (en) Optical detection of symbols
CN106530292B (en) A kind of steel strip surface defect image Fast Identification Method based on line scan camera
CN116091503B (en) Method, device, equipment and medium for discriminating panel foreign matter defects
JP6028972B2 (en) Image processing apparatus, image processing method, and image processing program
CN111163332A (en) Video pornography detection method, terminal and medium
CN113052116B (en) Ultrasonic video data processing method and device, ultrasonic equipment and storage medium
CN115512283A (en) Parcel image processing method and device, computer equipment and storage medium
CN113139952B (en) Image processing method and device
CN114255344A (en) Security check image segmentation method and device, electronic equipment and computer readable medium
KR102430120B1 (en) Apparatus and method for improving the performance of stereo-based ROI detection algorithm
CN116119283A (en) Belt longitudinal tear detection method, device and system
Wenzhong et al. Segmentation of chromosome images by mathematical morphology
CN116259092A (en) Unmanned aerial vehicle aerial photo face recognition method, system, equipment and readable storage medium
CN115086648A (en) Camera definition detection method and device
Shi A Study of Crack Detection Model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40052366

Country of ref document: HK

GR01 Patent grant