CN111294594B - Security inspection method, device, system and storage medium - Google Patents

Security inspection method, device, system and storage medium

Info

Publication number
CN111294594B
CN111294594B CN202010120295.1A
Authority
CN
China
Prior art keywords
coordinate
data
video frame
packet
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010120295.1A
Other languages
Chinese (zh)
Other versions
CN111294594A (en)
Inventor
叶希立
余鸿浩
夏煌帅
李站
夏灿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Huashi Zhijian Technology Co ltd
Original Assignee
Zhejiang Huashi Zhijian Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Huashi Zhijian Technology Co ltd filed Critical Zhejiang Huashi Zhijian Technology Co ltd
Priority to CN202010120295.1A priority Critical patent/CN111294594B/en
Publication of CN111294594A publication Critical patent/CN111294594A/en
Application granted granted Critical
Publication of CN111294594B publication Critical patent/CN111294594B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146Data rate or code amount at the encoder output
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/44Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a security inspection method, a security inspection device, a security inspection system and a storage medium, which are used for reducing the data processing amount in the security inspection process. The method comprises the following steps: acquiring video data in a luggage security check process, wherein the video data comprises YUV data of each video frame; extracting the Y data in the YUV data of a current video frame, wherein the Y data comprises the Y components of each pixel unit in the current video frame; according to a preset mapping relation, sequentially corresponding the Y component of each pixel unit in the current video frame to a coordinate point in a coordinate space to obtain a line packet coordinate area of each line packet in the coordinate space, wherein each line packet coordinate area comprises the coordinate points corresponding to Y components smaller than a first threshold value; and acquiring the YUV data corresponding to the line packet coordinate area of each line packet in the current video frame, and performing security inspection on the line packets in the current video frame according to that YUV data.

Description

Security inspection method, device, system and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a security inspection method, apparatus, system, and storage medium.
Background
With the continuous development of computer technology, computer technology is widely applied in many areas, such as the field of security inspection. The current security inspection method generally comprises the following steps: the security inspection machine shoots video during the luggage security inspection process and transmits the video to a processing device; the device receives and decodes the video to obtain a plurality of video frames, and performs identification processing on all of those frames to judge whether the luggage contains contraband. However, because this method requires the device to identify every video frame, it results in a large computational cost.
Disclosure of Invention
The embodiment of the application provides a security inspection method, a security inspection device, a security inspection system and a storage medium, which are used for reducing data processing amount in the security inspection process.
In a first aspect, a security inspection method is provided, including:
acquiring video data in a luggage security check process; the video data comprises YUV data of each frame of video;
extracting Y data in YUV data of a current video frame; wherein the Y data comprises Y components of each pixel unit in the current video frame;
according to a preset mapping relation, sequentially corresponding the Y component of each pixel unit in the current video frame to a coordinate point in a coordinate space to obtain a line packet coordinate area of each line packet in the coordinate space; each line packet coordinate area comprises the coordinate points corresponding to Y components smaller than a first threshold value;
and acquiring YUV data corresponding to the line packet coordinate area corresponding to each line packet in the current video frame, and performing security inspection on the line packet in the current video frame according to the YUV data corresponding to the line packet coordinate area of each line packet.
In a possible embodiment, according to a preset mapping relationship, sequentially corresponding the Y component of each pixel unit in the current video frame to a coordinate point in a coordinate space, to obtain a row packet coordinate area of each row packet in the coordinate space, including:
according to a preset mapping relation, sequentially corresponding Y components of all pixel units in a current video frame to points in a coordinate space;
determining a minimum coordinate point and a maximum coordinate point of a closed region formed by coordinate points corresponding to the Y components smaller than a first threshold;
and, for the closed area, determining a rectangular area surrounded by the minimum coordinate point and the maximum coordinate point as a line packet coordinate area.
In one possible embodiment, determining the minimum coordinate point and the maximum coordinate point of a closed region formed by the coordinate points corresponding to the Y components smaller than the first threshold includes:
determining each closed region formed by coordinate points corresponding to the Y component smaller than a first threshold value;
traversing each Y component along the direction of a second coordinate axis for each closed area to obtain a maximum coordinate value and a minimum coordinate value of the luggage on a first coordinate axis; and,
traversing each Y component along the direction of the first coordinate axis for each closed area to obtain a maximum coordinate value and a minimum coordinate value of the luggage on the second coordinate axis;
the coordinate space is composed of a first coordinate axis and a second coordinate axis, the maximum coordinate point is composed of the maximum coordinate value which is arranged on the first coordinate axis in a row and the maximum coordinate value which is arranged on the second coordinate axis in a row, and the minimum coordinate point is composed of the minimum coordinate value which is arranged on the first coordinate axis in a row and the minimum coordinate value which is arranged on the second coordinate axis in a row.
In one possible embodiment, the Y data corresponding to the coordinate points of the coordinate space is binarized Y data; the binarized Y data includes a first value and a second value, where the first value represents a line packet pixel unit whose Y component in the Y data is smaller than the first threshold, and the second value represents a background pixel unit whose Y component in the Y data is greater than or equal to the first threshold; and,
the line packet coordinate area of each line packet comprises the coordinate points corresponding to the first value.
In a possible embodiment, the first threshold is obtained by weighting according to a second threshold corresponding to a video frame previous to the current video frame and the difference; the difference value is the difference between the Y components corresponding to the background pixel units in the two previous video frames of the current video frame.
In a second aspect, a security inspection apparatus is provided, including:
the acquisition module is used for acquiring video data in the process of luggage security inspection; the video data comprises YUV data of each frame of video;
the extracting module is used for extracting Y data in the YUV data of the current video frame; wherein the Y data comprises Y components of each pixel unit in the current video frame;
the mapping module is used for sequentially corresponding the Y component of each pixel unit in the current video frame to a coordinate point in a coordinate space according to a preset mapping relation to obtain a line packet coordinate area of each line packet in the coordinate space; each line packet coordinate area comprises the coordinate points corresponding to Y components smaller than a first threshold value;
and the security inspection module is used for acquiring YUV data corresponding to the line packet coordinate area corresponding to each line packet in the current video frame and performing security inspection on the line packet in the current video frame according to the YUV data corresponding to the line packet coordinate area of each line packet.
In a possible embodiment, the mapping module is specifically configured to:
according to a preset mapping relation, sequentially corresponding Y components of all pixel units in a current video frame to points in a coordinate space;
determining a minimum coordinate point and a maximum coordinate point of a closed region formed by coordinate points corresponding to the Y components smaller than a first threshold;
and, for the closed area, determining a rectangular area surrounded by the minimum coordinate point and the maximum coordinate point as a line packet coordinate area.
In a third aspect, there is provided a security inspection system comprising a security inspection apparatus and a security inspection machine as discussed in the first aspect, wherein:
and the security check machine is used for collecting video data in the luggage security check process and sending the video data to the security check device.
In a fourth aspect, a security inspection apparatus is provided, including:
at least one processor, and
a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to implement the method of any one of the first aspects by executing the instructions stored by the memory.
In a fifth aspect, there is provided a computer readable storage medium having stored thereon computer instructions which, when run on a computer, cause the computer to perform the method of any of the first aspects.
Due to the adoption of the technical scheme, the application at least comprises the following beneficial effects:
in the embodiment of the application, before the video data is decoded, the video data can be screened according to the Y component in the video data, so that only the YUV data corresponding to the line packet image areas needs to be decoded and only those partial areas need to be identified; this reduces both the amount of data subsequently decoded and the amount of data in the subsequent identification process. Moreover, the image area corresponding to each line packet coordinate area can be obtained directly, which reduces the subsequent image segmentation work, further reduces the data processing amount, avoids inaccurate identification caused by inaccurate segmentation, and improves the accuracy of the subsequent identification of line packets.
Drawings
Fig. 1 is a schematic view of an application scenario of a security inspection method according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a security inspection method according to an embodiment of the present disclosure;
fig. 3 is a first schematic diagram of traversing a Y component according to an embodiment of the present disclosure;
fig. 4 is a second schematic diagram of a traversal Y component according to an embodiment of the present application;
fig. 5 is a schematic diagram of each determined row packet area according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a security inspection apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a security inspection apparatus according to an embodiment of the present application.
Detailed Description
In order to better understand the technical solutions provided by the embodiments of the present application, the following detailed description is made with reference to the drawings and specific embodiments.
In order to reduce the computational overhead of the device in the security inspection process, embodiments of the present application provide a security inspection method, which may be performed by a security inspection apparatus. The security inspection apparatus may be implemented either by a controller in a security inspection machine or by a separate device with a Graphics Processing Unit (GPU), where the device with the GPU may be implemented by a chip or an Application Specific Integrated Circuit (ASIC). The application scenario of the method is explained below.
Referring to fig. 1, the application scenario includes a security check machine 110 and a security check device 150 communicatively connected to the security check machine 110. The security inspection machine 110 is provided with a camera 130 at its inlet, a radioactive light scanner 140 is arranged inside the security inspection machine 110, and the security inspection machine 110 further includes a conveyor belt 120. The radioactive light scanner 140 is, for example, an X-ray scanner.
At security inspection ports of various scenes, such as the security inspection port of a railway station or of an exit or entrance, the camera 130 captures a visible light image of each bag carrier currently carrying luggage before the carrier passes through the security inspection machine 110. After the bag carrier places the luggage on the conveyor belt 120 of the security check machine 110, the radioactive light scanner 140 scans the luggage to obtain image data of the luggage. After the security inspection device 150 obtains the image data, it first locates the image data corresponding to each line packet, and then identifies only that image data to obtain a security inspection result; because not all of the image data needs to be identified subsequently, the processing amount of the security inspection device 150 is reduced.
It should be noted that fig. 1 illustrates the security inspection device 150 and the security inspection machine 110 as two independent apparatuses, but in practice the security inspection device 150 may be coupled to the security inspection machine 110. Fig. 1 also shows the camera 130 as part of the security inspection machine 110 as an example, although the two may be independent of each other.
Based on the application scenario discussed in fig. 1, a security inspection method related to the embodiment of the present application is described below.
Referring to fig. 2, the method includes:
s210, video data in the process of the luggage security inspection is obtained.
Specifically, the security inspection machine 110 may capture, by the radioactive light scanner 140, the line packets passing through the conveyor belt 120 during the security inspection, and thereby obtain the video data. Video data refers to the metadata of one or more captured video frames. Of course, luggage does not pass through at every moment, so a video frame in the video data may or may not include one or more line packets. The metadata of each video frame can be YUV data: Y represents the brightness of each pixel unit (also denoted Luminance or Luma, and understandable as a gray value), while U and V represent the chromaticity of each pixel unit (also denoted Chrominance or Chroma). Besides the YUV data of each video frame, the video data may include a video decoding method, for example a specific YUV data decoding method. The security inspection device 150 may obtain the video data from the security inspection machine 110 through a network interface or a video interface. The video frames captured by the security check machine 110 may be black and white or color, which depends on the radioactive scanning parameters in the security check machine 110; the color of the video frames is not specifically limited in the present application. A pixel unit can be understood as the minimum unit for encoding video, and the size of each pixel unit can be equal to the size of one pixel point or of several pixel points.
S220, extracting Y data in the YUV data of the current video frame in the video data.
Wherein the Y data includes Y components for each pixel unit in the current video frame.
After the security inspection device 150 obtains the video data, the video data is converted to obtain YUV data corresponding to the video data, or the format of the video data is the YUV format, so that the security inspection device 150 can obtain YUV data by decoding after obtaining the video data. After obtaining the YUV data, the security inspection apparatus 150 may extract Y data in the YUV data. Of course, the security check machine 110 will send the video data corresponding to each video frame to the security check device 150, and the YUV data processing process for a video frame is taken as an example to describe the security check method related to the embodiment of the present application. For the sake of distinction, the processed video frame is referred to as the current video frame, and it should be noted that any video frame can be regarded as the current video frame when being processed.
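As an illustration of extracting the Y data from the YUV data of a frame, the following Python sketch assumes a planar YUV 4:2:0 (I420) buffer, in which the width × height luma samples precede the quarter-resolution U and V planes; the function name and the I420 layout are assumptions made for this example and are not specified by the application:

```python
import numpy as np

def extract_y_plane(frame_bytes: bytes, width: int, height: int) -> np.ndarray:
    """Return the Y (luma) plane of a planar YUV 4:2:0 (I420) frame.

    In I420 the buffer holds width*height Y samples, followed by the
    quarter-resolution U and V planes; only the leading Y block is needed
    to locate line packet regions.
    """
    y_size = width * height
    assert len(frame_bytes) >= y_size + y_size // 2, "buffer too small for I420"
    return np.frombuffer(frame_bytes, dtype=np.uint8, count=y_size).reshape(height, width)

# A tiny synthetic 4x4 frame: 16 luma bytes followed by 8 chroma bytes.
frame = bytes(range(16)) + bytes(8)
y = extract_y_plane(frame, 4, 4)
```

Working on the contiguous Y block alone means the chroma planes never have to be touched until a line packet area has actually been located.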
In one possible embodiment, the Y component may be binarized and then the binarized Y component may be mapped to the coordinate space.
Specifically, if the Y component of a pixel unit in the current video frame is smaller than the first threshold, the Y component is set to a first value, for example 1; if the Y component is greater than or equal to the first threshold, it is set to a second value, for example 0. Binarizing the Y components makes it convenient to quickly identify, in the subsequent steps, the coordinate points corresponding to Y components smaller than the threshold.
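A minimal sketch of this binarization in Python; the threshold value of 200 is a hypothetical placeholder, since the application's actual first threshold adapts from frame to frame:

```python
import numpy as np

# Hypothetical value standing in for the adaptive first threshold T1.
FIRST_THRESHOLD = 200

def binarize_y(y_plane: np.ndarray, threshold: int = FIRST_THRESHOLD) -> np.ndarray:
    """Map luma < threshold (line packet pixels) to 1 and
    luma >= threshold (background pixels) to 0."""
    return (y_plane < threshold).astype(np.uint8)

y = np.array([[255, 120], [90, 255]], dtype=np.uint8)
mask = binarize_y(y)  # line packet pixels become 1, background becomes 0
```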
And S230, sequentially corresponding the Y component of each pixel unit in the current video frame to the coordinate points in the coordinate space according to a preset mapping relation, and obtaining the row packet coordinate area of each row packet in the coordinate space.
Specifically, the security inspection apparatus 150 may store a mapping relationship in advance, where the mapping relationship is used to indicate a pixel unit of a video frame and a corresponding coordinate region size of the pixel unit in a coordinate space, and for example, the mapping relationship is 1:1, which indicates that a region occupied by each pixel unit is 1 × 1. The security inspection apparatus 150 sequentially associates the Y component corresponding to each pixel unit with each coordinate point in the coordinate space, which is equivalent to obtaining the corresponding position of each Y component in the coordinate space. Here, the coordinate points corresponding to the coordinate space may be the extracted Y data or may be the binarized Y data.
After each Y component is mapped to the coordinate space, the closed areas can be located. Since the video frame is obtained by the security inspection machine 110 through penetrating ray scanning, theoretically the background area in the video frame should be white and the line packet areas should be in other colors; the Y component corresponding to a background pixel unit should therefore be 255, and the Y component corresponding to a line packet pixel unit should be less than 255. Thus, each closed area formed by the coordinate points corresponding to Y components less than the first threshold can be determined, the minimum coordinate point and the maximum coordinate point corresponding to the closed area can be obtained, and the corresponding line packet coordinate area can be determined according to them. The minimum coordinate point may be understood as the coordinate point composed of the minimum coordinate values of the closed area on each coordinate axis, and the maximum coordinate point as the coordinate point composed of the maximum coordinate values of the closed area on each coordinate axis. For example, when the closed area is a rectangle, the minimum circumscribed coordinate point of the closed area may represent the minimum coordinate point, and the maximum circumscribed coordinate point may represent the maximum coordinate point.
Specifically, the closed area could be directly determined as the line packet coordinate area; however, the closed area may be irregular, and an irregular closed area may make it impossible to subsequently determine the YUV data corresponding to the area.
Thus, in one possible embodiment, after each closed area is determined: S1: traversing each Y component along the direction of the second coordinate axis to obtain the maximum coordinate value and the minimum coordinate value of each closed area on the first coordinate axis;
s2: mapping Y data of the current video frame along a second coordinate axis, traversing each Y component along the direction of the first coordinate axis, and obtaining a maximum coordinate value and a minimum coordinate value of each closed area on the second coordinate axis;
s3: and obtaining the line packet coordinate area according to the maximum coordinate value and the minimum coordinate value of each closed area on the first coordinate axis and the maximum coordinate value and the minimum coordinate value of each closed area on the second coordinate axis.
Specifically, the maximum coordinate value of each closed area on the first coordinate axis and its maximum coordinate value on the second coordinate axis form the maximum coordinate point, and the minimum coordinate value of each closed area on the first coordinate axis and its minimum coordinate value on the second coordinate axis form the minimum coordinate point. The regular area formed by the maximum coordinate point and the minimum coordinate point corresponding to each closed area is determined as the luggage coordinate area of one piece of luggage, and so on, so that the luggage coordinate area corresponding to each piece of luggage can be obtained.
As an embodiment, a rectangular region formed by the maximum coordinate point and the minimum coordinate point may be determined as a row packet region, which facilitates subsequent acquisition of YUV data corresponding to each pixel unit in the row packet region.
As an example, the execution sequence of S1 and S2 in the foregoing may be arbitrary, and the application is not particularly limited. When S1 is performed first, only the Y component between the maximum coordinate value and the minimum coordinate value on the first coordinate axis may be traversed when S2 is performed, and the number of traversals of the security device may be relatively reduced.
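Steps S1 to S3 above amount to collapsing the binarized mask along each axis and reading off the extreme indices. A minimal Python sketch over a frame containing a single closed area (function and variable names are illustrative only):

```python
import numpy as np

def bounding_box(mask: np.ndarray):
    """Return ((x_min, y_min), (x_max, y_max)) of the nonzero (line packet)
    pixels of a binarized Y mask, or None if the frame has no line packet.

    S1: collapse along the y axis (rows) to find the extremes on the x axis;
    S2: collapse along the x axis (columns) to find the extremes on the y axis;
    S3: combine the extremes into the minimum and maximum coordinate points.
    """
    cols = mask.any(axis=0)          # S1: which x positions contain luggage
    rows = mask.any(axis=1)          # S2: which y positions contain luggage
    if not cols.any():
        return None                  # no line packet in this frame
    x_idx = np.flatnonzero(cols)
    y_idx = np.flatnonzero(rows)
    return (int(x_idx[0]), int(y_idx[0])), (int(x_idx[-1]), int(y_idx[-1]))

mask = np.zeros((6, 8), dtype=np.uint8)
mask[2:4, 3:6] = 1                   # one rectangular closed area
box = bounding_box(mask)
```

Performing S1 first and then restricting S2 to the columns between the x extremes, as the paragraph above suggests, would reduce the number of traversed elements further.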
For example, referring to fig. 3, a mapping diagram of a current video frame is shown, where the area of the current video frame in the coordinate space is 1024 × 1280 as shown in fig. 3. Traversing each Y component along the y direction, the minimum coordinate value x1 and the maximum coordinate value x2 on the x axis can be determined. Referring to fig. 4, traversing each Y component along the x direction can determine the minimum coordinate value y1 and the maximum coordinate value y2 on the y axis, thereby determining the minimum coordinate point as (x1, y1) and the maximum coordinate point as (x2, y2).
For example, referring to fig. 5, the minimum coordinate point of the first closed area is determined to be (x11, y11) and its maximum coordinate point is (x12, y12); the minimum coordinate point of the second closed area is (x21, y21) and its maximum coordinate point is (x22, y22); the minimum coordinate point of the third closed area is (x31, y31) and its maximum coordinate point is (x32, y32). The security inspection device 150 determines the line packet coordinate area corresponding to each closed area according to its minimum and maximum coordinate points, such as parcel 1, parcel 2 and parcel 3 shown in fig. 5. The area corresponding to parcel 3 may be a circular area, or the rectangular area circumscribing that circular area.
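When a frame contains several line packets, the closed areas must first be separated before per-area extremes can be taken. One common way to do this, not prescribed by the application, is connected-component labeling; a minimal breadth-first flood-fill sketch:

```python
from collections import deque
import numpy as np

def region_boxes(mask: np.ndarray):
    """Label 4-connected closed areas of 1s in a binarized Y mask and return
    each area's ((x_min, y_min), (x_max, y_max)) bounding box."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    boxes = []
    for y0 in range(h):
        for x0 in range(w):
            if mask[y0, x0] and not seen[y0, x0]:
                # Flood-fill one closed area, tracking its extremes.
                q = deque([(y0, x0)])
                seen[y0, x0] = True
                xmin = xmax = x0
                ymin = ymax = y0
                while q:
                    y, x = q.popleft()
                    xmin, xmax = min(xmin, x), max(xmax, x)
                    ymin, ymax = min(ymin, y), max(ymax, y)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                boxes.append(((xmin, ymin), (xmax, ymax)))
    return boxes

mask = np.zeros((5, 10), dtype=np.uint8)
mask[0:2, 0:3] = 1   # first line packet
mask[3:5, 6:9] = 1   # second line packet
boxes = region_boxes(mask)
```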
In one possible embodiment, the Y component threshold corresponding to each video frame is adaptively changed, and the determination of the first threshold for the current frame is described below.
The first threshold is obtained by weighting according to a second threshold corresponding to a previous video frame of the current video frame and the difference value; the difference value is the difference between the Y components corresponding to the background pixel units in the two previous video frames of the current video frame.
Specifically, the intensity of the radioactive ray in the security inspection machine 110 decreases with time and its energy value drops, so the Y component of a background pixel unit scanned by the security inspection machine 110 is then actually smaller than 255; the threshold therefore needs to be adjusted to ensure the accuracy of the determined line packet coordinate areas. The formula for determining the first threshold T1 is as follows:
T1 = r × (T2 - (B2 - B3))
wherein T1 represents the first threshold, r represents a weighting coefficient taking a value less than 1, T2 represents the threshold corresponding to the previous video frame of the current frame, B2 represents the value of the Y component of a background pixel unit in the previous frame of the current frame, and B3 represents the value of the Y component of a background pixel unit in the 2nd frame before the current frame.
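Interpreted as code, the adaptive update reads T1 = r × (T2 - (B2 - B3)); the value r = 0.9 below is an assumed example, since the application only requires r to be less than 1:

```python
def next_threshold(t_prev: float, b_prev: float, b_prev2: float, r: float = 0.9) -> float:
    """T1 = r * (T2 - (B2 - B3)): the new threshold tracks the drift of the
    background luma as the scanner's ray intensity decays over time.

    t_prev  -- T2, the threshold of the previous video frame
    b_prev  -- B2, the background luma in the previous frame
    b_prev2 -- B3, the background luma two frames before the current frame
    r       -- weighting coefficient, required to be less than 1
    """
    return r * (t_prev - (b_prev - b_prev2))

# Background luma dimming from 252 to 250 raises the correction term by 2.
t1 = next_threshold(200.0, 250.0, 252.0, r=0.9)
```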
S240, obtaining YUV data corresponding to the line packet coordinate area corresponding to each line packet in the current video frame, and performing security check on the line packet in the current video frame according to the YUV data corresponding to the line packet coordinate area of each line packet.
After each line pack coordinate region of the current video frame is obtained, YUV data corresponding to each line pack coordinate region can be obtained, the YUV data corresponding to each line pack coordinate region is decoded, line pack image regions corresponding to each line pack coordinate region in the current video frame are obtained, and then the line pack image regions are identified, so that the security inspection process of the line pack images is realized.
Specifically, each pixel unit is associated with a corresponding UV component. After the coordinate area of each line packet is obtained, the Y components forming each line packet are known, so the UV data associated with these Y components can be obtained from the YUV data described above, yielding the YUV data corresponding to the line packet. Decoding that YUV data then yields the image area of the corresponding line packet. The security inspection device 150 only needs to decode the YUV data corresponding to the line packets and perform the subsequent identification on them, which relatively reduces the processing amount of the security inspection device 150.
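Cropping the YUV data of one line packet area can be sketched as follows, assuming planar YUV 4:2:0 so that the chroma planes are quarter resolution and the crop must be aligned to even luma coordinates; the outward-rounding policy and function name are assumptions of this example:

```python
import numpy as np

def crop_yuv420(y: np.ndarray, u: np.ndarray, v: np.ndarray, box):
    """Crop one line packet area out of planar YUV 4:2:0 data.

    box is ((x_min, y_min), (x_max, y_max)) in full-resolution Y coordinates;
    the U and V planes are half resolution in each dimension, so their
    indices are halved. Coordinates are rounded outward to even/odd values
    so the chroma crop stays aligned with the luma crop.
    """
    (x0, y0), (x1, y1) = box
    x0, y0 = x0 & ~1, y0 & ~1            # round min corner down to even
    x1, y1 = x1 | 1, y1 | 1              # round max corner up to odd (inclusive)
    y_crop = y[y0:y1 + 1, x0:x1 + 1]
    u_crop = u[y0 // 2:(y1 + 1) // 2, x0 // 2:(x1 + 1) // 2]
    v_crop = v[y0 // 2:(y1 + 1) // 2, x0 // 2:(x1 + 1) // 2]
    return y_crop, u_crop, v_crop

y = np.arange(64, dtype=np.uint8).reshape(8, 8)
u = np.zeros((4, 4), dtype=np.uint8)
v = np.zeros((4, 4), dtype=np.uint8)
y_c, u_c, v_c = crop_yuv420(y, u, v, ((2, 2), (5, 5)))
```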
After obtaining the image area corresponding to each piece of luggage, the security inspection device 150 may identify whether the luggage contains contraband according to the color, outline, and other features of the luggage image area.
Specifically, articles of different materials appear in different colors in the security inspection machine 110, so the security inspection device 150 may determine the material of an article from its color and identify its category from the outline each article presents in the luggage image area.
In the embodiment of the application, the video data is screened according to its Y component before decoding, so only the YUV data corresponding to the luggage image areas needs to be decoded and only those partial areas need to be identified. This reduces both the amount of data the security inspection device 150 must decode and the amount of data in the subsequent identification process. Because the image area corresponding to each luggage coordinate area is obtained directly, the image segmentation the security inspection device 150 must perform afterwards is also reduced, which improves the accuracy of the subsequent luggage identification.
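The screening step of this embodiment — binarize the Y plane against the first threshold, then derive each luggage coordinate area from the minimum and maximum coordinates on the two axes — can be sketched as follows. A single closed region per frame is assumed for brevity, and the helper name is hypothetical; a production version would label connected components first (e.g. with `scipy.ndimage.label`):

```python
import numpy as np

def luggage_regions(y_plane: np.ndarray, threshold: int):
    """Screen a frame's Y plane before any decoding.

    1. Binarize: pixel units with Y < threshold are luggage pixels
       (first value 1), the rest are background (second value 0).
    2. Take the min/max coordinates of the luggage pixels on each
       axis; the rectangle they enclose is the luggage coordinate area.
    """
    mask = (y_plane < threshold).astype(np.uint8)  # binarized Y data
    ys, xs = np.nonzero(mask)                      # luggage coordinates
    if xs.size == 0:
        return []  # no luggage pixels in this frame
    # (x_min, y_min) and (x_max, y_max) enclose the coordinate area
    return [(int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))]
```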
Based on the same inventive concept, an embodiment of the present application provides a security inspection apparatus, please refer to fig. 6, the apparatus includes:
an obtaining module 601, configured to obtain video data in the process of performing luggage security inspection; the video data comprises YUV data of each video frame;
an extracting module 602, configured to extract Y data in the YUV data of the current video frame; wherein the Y data comprises Y components of each pixel unit in the current video frame;
a mapping module 603, configured to sequentially map, according to a preset mapping relationship, the Y component of each pixel unit in the current video frame to a coordinate point in a coordinate space, and obtain the luggage coordinate area of each piece of luggage in the coordinate space; each luggage coordinate area comprises coordinate points corresponding to Y components smaller than the first threshold;
and a security inspection module 604, configured to acquire the YUV data corresponding to the luggage coordinate area of each piece of luggage in the current video frame, and perform a security check on the luggage in the current video frame according to that YUV data.
In a possible embodiment, the mapping module 603 is specifically configured to:
mapping, according to a preset mapping relationship, the Y component of each pixel unit in the current video frame sequentially to coordinate points in a coordinate space;
determining the minimum coordinate point and the maximum coordinate point of a closed region formed by the coordinate points corresponding to Y components smaller than the first threshold;
and determining, for the closed region, the rectangular area enclosed by the minimum coordinate point and the maximum coordinate point as the luggage coordinate area.
In a possible embodiment, the mapping module 603 is specifically configured to:
determining each closed region formed by the coordinate points corresponding to Y components smaller than the first threshold;
traversing each Y component along the direction of the second coordinate axis for each closed region to obtain the maximum and minimum coordinate values of the luggage on the first coordinate axis; and
traversing each Y component along the direction of the first coordinate axis for each closed region to obtain the maximum and minimum coordinate values of the luggage on the second coordinate axis;
wherein the coordinate space is composed of the first coordinate axis and the second coordinate axis, the maximum coordinate point is composed of the luggage's maximum coordinate value on the first coordinate axis and its maximum coordinate value on the second coordinate axis, and the minimum coordinate point is composed of the luggage's minimum coordinate value on the first coordinate axis and its minimum coordinate value on the second coordinate axis.
In one possible embodiment, the Y data corresponding to the coordinate points in the coordinate space is binarized Y data; the binarized Y data includes a first value representing a luggage pixel unit whose Y component is smaller than the first threshold and a second value representing a background pixel unit whose Y component is greater than or equal to the first threshold, and
the luggage coordinate area of each piece of luggage comprises the coordinate points corresponding to the first value.
In a possible embodiment, the first threshold is obtained by weighting a second threshold corresponding to the video frame preceding the current video frame with a difference value; the difference value is the difference between the Y components corresponding to the background pixel units in the two video frames preceding the current video frame.
Based on the same inventive concept, an embodiment of the present application provides a security inspection system; please continue to refer to fig. 1. The system includes the security check machine 110 and the security check device 150 discussed in fig. 6, wherein:
the security check machine 110 is configured to collect video data during the luggage security check and send the video data to the security check device 150. The processing of the video data by the security check device 150 can refer to the foregoing discussion and is not repeated here.
Based on the same inventive concept, an embodiment of the present application provides a security inspection apparatus, please refer to fig. 7, the apparatus includes:
at least one processor 701, and
a memory 702 communicatively coupled to the at least one processor 701;
wherein the memory 702 stores instructions executable by the at least one processor 701, and the at least one processor 701 implements the security check method as discussed above by executing the instructions stored by the memory 702.
As an example, the processor 701 in fig. 7 may implement the security check method discussed above.
Based on the same inventive concept, embodiments of the present application provide a computer-readable storage medium storing computer instructions that, when executed on a computer, cause the computer to perform the security check method as discussed above.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all changes and modifications that fall within the scope of the present application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A security inspection method, comprising:
acquiring video data in a luggage security check process; the video data comprises YUV data of each video frame;
extracting Y data in YUV data of a current video frame; wherein the Y data comprises Y components of each pixel unit in the current video frame;
mapping, according to a preset mapping relationship, the Y component of each pixel unit in the current video frame sequentially to coordinate points in a coordinate space to obtain the luggage coordinate area of each piece of luggage in the coordinate space; each luggage coordinate area comprises coordinate points corresponding to Y components smaller than a first threshold;
and acquiring YUV data corresponding to the luggage coordinate area of each piece of luggage in the current video frame, and performing a security check on the luggage in the current video frame according to the YUV data corresponding to each luggage coordinate area.
2. The method of claim 1, wherein sequentially mapping the Y component of each pixel unit in the current video frame to coordinate points in the coordinate space according to the preset mapping relationship to obtain the luggage coordinate area of each piece of luggage in the coordinate space comprises:
mapping, according to a preset mapping relationship, the Y component of each pixel unit in the current video frame sequentially to coordinate points in a coordinate space;
determining the minimum coordinate point and the maximum coordinate point of a closed region formed by the coordinate points corresponding to Y components smaller than the first threshold;
and determining, for the closed region, the rectangular area enclosed by the minimum coordinate point and the maximum coordinate point as the luggage coordinate area.
3. The method of claim 2, wherein determining the minimum coordinate point and the maximum coordinate point of a closed region formed by the coordinate points corresponding to Y components smaller than the first threshold comprises:
determining each closed region formed by the coordinate points corresponding to Y components smaller than the first threshold;
traversing each Y component along the direction of the second coordinate axis for each closed region to obtain the maximum and minimum coordinate values of the luggage on the first coordinate axis; and
traversing each Y component along the direction of the first coordinate axis for each closed region to obtain the maximum and minimum coordinate values of the luggage on the second coordinate axis;
wherein the coordinate space is composed of the first coordinate axis and the second coordinate axis, the maximum coordinate point is composed of the luggage's maximum coordinate value on the first coordinate axis and its maximum coordinate value on the second coordinate axis, and the minimum coordinate point is composed of the luggage's minimum coordinate value on the first coordinate axis and its minimum coordinate value on the second coordinate axis.
4. The method according to any one of claims 1 to 3, wherein the Y data corresponding to the coordinate points in the coordinate space is binarized Y data; the binarized Y data includes a first value representing a luggage pixel unit whose Y component is smaller than the first threshold and a second value representing a background pixel unit whose Y component is greater than or equal to the first threshold, and
the luggage coordinate area of each piece of luggage comprises the coordinate points corresponding to the first value.
5. The method according to any one of claims 1 to 3, wherein the first threshold is obtained by weighting a second threshold corresponding to the video frame preceding the current video frame with a difference value; the difference value is the difference between the Y components corresponding to the background pixel units in the two video frames preceding the current video frame.
6. A security device, comprising:
an acquisition module, configured to acquire video data in the process of luggage security inspection; the video data comprises YUV data of each video frame;
an extracting module, configured to extract the Y data in the YUV data of the current video frame; wherein the Y data comprises the Y component of each pixel unit in the current video frame;
a mapping module, configured to sequentially map, according to a preset mapping relationship, the Y component of each pixel unit in the current video frame to a coordinate point in a coordinate space and obtain the luggage coordinate area of each piece of luggage in the coordinate space; each luggage coordinate area comprises coordinate points corresponding to Y components smaller than a first threshold;
and a security inspection module, configured to acquire the YUV data corresponding to the luggage coordinate area of each piece of luggage in the current video frame and perform a security check on the luggage in the current video frame according to that YUV data.
7. The apparatus of claim 6, wherein the mapping module is specifically configured to:
mapping, according to a preset mapping relationship, the Y component of each pixel unit in the current video frame sequentially to coordinate points in a coordinate space;
determining the minimum coordinate point and the maximum coordinate point of a closed region formed by the coordinate points corresponding to Y components smaller than the first threshold;
and determining, for the closed region, the rectangular area enclosed by the minimum coordinate point and the maximum coordinate point as the luggage coordinate area.
8. A security inspection system comprising a security inspection apparatus according to claim 6 or 7 and a security inspection machine, wherein:
the security check machine is configured to collect video data during the luggage security check and send the video data to the security check device.
9. A security device, comprising:
at least one processor, and
a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, and the at least one processor performs the method of any one of claims 1 to 5 by executing the instructions stored in the memory.
10. A computer-readable storage medium having stored thereon computer instructions which, when executed on a computer, cause the computer to perform the method of any one of claims 1 to 5.
CN202010120295.1A 2020-02-26 2020-02-26 Security inspection method, device, system and storage medium Active CN111294594B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010120295.1A CN111294594B (en) 2020-02-26 2020-02-26 Security inspection method, device, system and storage medium

Publications (2)

Publication Number Publication Date
CN111294594A CN111294594A (en) 2020-06-16
CN111294594B true CN111294594B (en) 2022-06-03

Family

ID=71030792

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010120295.1A Active CN111294594B (en) 2020-02-26 2020-02-26 Security inspection method, device, system and storage medium

Country Status (1)

Country Link
CN (1) CN111294594B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112069984A (en) * 2020-09-03 2020-12-11 浙江大华技术股份有限公司 Object frame matching display method and device
CN113570543A (en) * 2021-05-27 2021-10-29 浙江大华技术股份有限公司 Security check package identification method, system, storage medium and equipment

Citations (3)

Publication number Priority date Publication date Assignee Title
US6141434A (en) * 1998-02-06 2000-10-31 Christian; Andrew Dean Technique for processing images
CN102831617A (en) * 2012-07-17 2012-12-19 聊城大学 Method and system for detecting and tracking moving object
CN106845443A (en) * 2017-02-15 2017-06-13 福建船政交通职业学院 Video flame detecting method based on multi-feature fusion


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220512

Address after: 310000 4th floor, building 6, No. 1181, Bin'an Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Zhejiang Huashi Zhijian Technology Co.,Ltd.

Address before: Hangzhou City, Zhejiang province Binjiang District 310053 shore road 1187

Applicant before: ZHEJIANG DAHUA TECHNOLOGY Co.,Ltd.

GR01 Patent grant