WO2021031626A1 - Image processing method and apparatus, computer system, and readable storage medium - Google Patents

Image processing method and apparatus, computer system, and readable storage medium

Info

Publication number
WO2021031626A1
Authority
WO
WIPO (PCT)
Prior art keywords
visible light
suspect
image
light image
scanned
Prior art date
Application number
PCT/CN2020/089633
Other languages
English (en)
French (fr)
Inventor
吴南南
吴凡
马艳芳
彭华
赵世锋
王涛
Original Assignee
同方威视技术股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 同方威视技术股份有限公司
Publication of WO2021031626A1 publication Critical patent/WO2021031626A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10116X-ray image

Definitions

  • the present disclosure relates to an image processing method, an image processing device, a computer system and a computer readable storage medium.
  • In places with heavy foot traffic such as subway or railway stations, security inspection equipment is usually required to ensure the safety of personnel and the normal operation of vehicles.
  • The security inspection equipment can inspect the packages carried by passengers.
  • During the security check, passengers place their packages at one side of the security inspection equipment; the equipment conveys each package into the security check box for X-ray scanning and then conveys it out of the security check box.
  • Staff judge whether there are suspicious items in a package by viewing the X-ray scanned image.
  • When staff judge that a package contains suspicious items, the package needs to be opened and inspected locally at the security checkpoint.
  • Because local security staff obtain very little information about the package, they cannot accurately locate the suspect during the unpacking inspection.
  • In particular, where centralized image judging is used on a large scale, the unpacking inspection stage relies on the premise that baggage remotely judged as requiring inspection cannot be taken away by passengers.
  • Since local security staff obtain very little package information and cannot accurately locate the suspect when the package is opened for inspection, problems such as low unpacking efficiency arise.
  • An aspect of the present disclosure provides an image processing method including: acquiring a scanned image marked with a suspect, wherein the scanned image is obtained by scanning an inspected object with a security inspection device; determining, according to the scanning start time of the scanned image, a visible light image corresponding to the scanned image, wherein the visible light image is obtained by capturing the inspected object with a visible light image acquisition device; and marking the suspect in the visible light image according to the marked position of the suspect in the scanned image.
  • Another aspect of the present disclosure provides an image processing device, including: an acquisition module for acquiring a scanned image marked with a suspect, wherein the scanned image is obtained by scanning the inspected object with a security inspection device; a determination module for determining, according to the scanning start time of the scanned image, the visible light image corresponding to the scanned image, wherein the visible light image is obtained by capturing the inspected object with a visible light image acquisition device; and a marking module for marking the suspect in the visible light image according to the marked position of the suspect in the scanned image.
  • Another aspect of the present disclosure provides a computer system, including: one or more processors; and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the foregoing method.
  • Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions, which are used to implement the foregoing method when executed.
  • Another aspect of the present disclosure provides a computer program; the computer program includes computer-executable instructions, and the instructions are used to implement the foregoing method when executed.
  • Fig. 1 schematically shows an application scenario of an image processing method and device according to an embodiment of the present disclosure
  • Fig. 2 schematically shows a schematic diagram of a security inspection device according to another embodiment of the present disclosure
  • FIG. 3 schematically shows a flowchart of an image processing method according to an embodiment of the present disclosure
  • FIG. 4 schematically shows a schematic diagram of determining a visible light image corresponding to the scanned image according to the scanning start time of the scanned image according to an embodiment of the present disclosure
  • Fig. 5 schematically shows a schematic diagram of an X-ray image scanned under the main viewing angle
  • Fig. 6 schematically shows a schematic diagram of one frame of visible light images taken by a camera
  • Fig. 7 schematically shows a schematic diagram of an X-ray image scanned in a secondary viewing angle
  • FIG. 8 schematically shows a schematic diagram of another frame of visible light image taken by a camera
  • Fig. 9 schematically shows a schematic diagram for characterizing the size of an X-ray image
  • Fig. 10 schematically shows a schematic diagram for characterizing the size of a visible light image
  • Fig. 11 schematically shows another schematic diagram for characterizing the size of a visible light image
  • Fig. 12 schematically shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
  • FIG. 13 schematically shows a block diagram of a computer system suitable for implementing an image processing method and apparatus according to an embodiment of the present disclosure.
  • Where an expression such as “at least one of A, B and C” is used, it should generally be interpreted in the sense commonly understood by those skilled in the art; for example, “a system having at least one of A, B and C” shall include, but is not limited to, systems having A alone, B alone, C alone, A and B, A and C, B and C, and/or A, B and C, etc.
  • the embodiments of the present disclosure provide an image processing method, an image processing device, a computer system, and a computer-readable storage medium.
  • the image processing method includes: acquiring a scanned image marked with a suspicious object, where the scanned image is obtained by scanning the inspected object through a security inspection device; determining the visible light image corresponding to the scanned image according to the scanning start time of the scanned image, Among them, the visible light image is obtained by collecting the inspected object by the visible light image acquisition device; and marking the suspect in the visible light image according to the marked position of the suspect in the scanned image.
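  • As a reading aid (not part of the patent), the following minimal Python sketch condenses the three operations under stated assumptions: the data structures, field names, and the idea of picking the frame whose capture time is closest to the computed collection time are illustrative placeholders for what the embodiments describe in detail below.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class ScannedImage:
    start_time: float                               # scanning start time t0 of the X-ray image
    size: Tuple[float, float]                       # (L1, W1): length and width of the X-ray image
    suspect_box: Tuple[float, float, float, float]  # (x1, y1, L2, W2) marked by the image judge

@dataclass
class VisibleFrame:
    capture_time: float                             # timestamp from the baggage capture camera
    size: Tuple[float, float]                       # (L3, W3): length and width of the visible image
    suspect_box: Optional[Tuple[float, float, float, float]] = None

def mark_matching_frame(scan: ScannedImage, frames: List[VisibleFrame],
                        transfer_time: float) -> VisibleFrame:
    """Outline of S310-S330: the scanned image is already marked (S310); pick the
    visible frame collected at t1 = t0 + transfer time (S320); attach the mark (S330)."""
    t1 = scan.start_time + transfer_time
    frame = min(frames, key=lambda f: abs(f.capture_time - t1))
    # Placeholder: the real transfer uses the pixel mapping K and the belt gaps c, d,
    # as given by the formulas later in this description.
    frame.suspect_box = scan.suspect_box
    return frame
```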
  • FIG. 1 schematically shows an application scenario of an image processing method and device according to an embodiment of the present disclosure. It should be noted that FIG. 1 is only an example of a scenario to which embodiments of the present disclosure can be applied, to help those skilled in the art understand the technical content of the present disclosure, but it does not mean that the embodiments of the present disclosure cannot be applied to other devices, systems, environments or scenarios.
  • the passenger’s items need to pass through the security inspection device 110 for detection.
  • The security inspection device 110 can perform X-ray scanning detection on the items and can send the scanned images of the detected items (for example, X-ray images) to the image judging station 120 via the network 130 in real time.
  • The image judging station 120 may include, for example, a display, and the display can show the X-ray images of the items sent by the security inspection device 110 in real time.
  • The image judging station 120 may be a remote image judging station or a local image judging station.
  • After the security inspection device 110 performs X-ray scanning detection on an item, it can also generate an image judging task and send it to a task scheduling center, which assigns an image judging station.
  • There may be multiple task scheduling centers, each of which can communicate with image judging stations and security inspection equipment; there may also be multiple image judging stations.
  • The communication architecture of the security inspection equipment, the task scheduling centers, and the image judging stations can be designed in a decentralized, intelligent, and distributed manner. Based on this communication framework, marking suspects in visible light images with the image processing method provided by the present disclosure enables collaboration between unpacking inspection and remote image judging, and helps local security personnel quickly and accurately find the package to be opened and the suspects in it.
  • The image judge can view the X-ray image of an item on the display and, when a suspect is found, send an unpacking instruction to the security inspection device 110 and/or the inspection station 140. After the security inspection device 110 and/or the inspection station 140 receives the unpacking instruction, the local inspector is notified to take the corresponding item off the security inspection device 110 for unpacking inspection. According to the embodiment of the present disclosure, the suspect in the X-ray image can be marked, and the marked image can be sent to the inspection station 140.
  • a visible light image acquisition device may be provided on the security inspection device 110.
  • the visible light image acquisition equipment includes a camera 111 and/or a camera 112, and the camera 111 and/or the camera 112 may be disposed above the security check box 113 of the security check device 110.
  • the camera 111 and/or the camera 112 may be used to obtain a visible light image of the detected object.
  • the security inspection equipment 110 may send the visible light image acquired by the camera 111 and/or the camera 112 to the inspection station 140.
  • The security check box 113 is provided with an item inlet and an item outlet; the conveying device 114 passes through the item inlet and the item outlet, and both ends of the conveying device 114 are exposed outside the security check box 113.
  • the conveyor 114 may be a conveyor belt, for example.
  • an X-ray scanning device may be provided on the inner side of the top of the security inspection box 113, and the X-ray scanning device may perform X-ray scanning on items passing through the security inspection box.
  • the inspection station 140 can match and bind the visible light image of the package and the X-ray image of the package, so as to assist local security personnel to quickly and accurately find the package to be opened for inspection.
  • the inspection station 140 can mark the suspect in the visible light image according to the marked position of the suspect in the X-ray image.
  • When performing an unpacking inspection, the local inspector can preliminarily determine the location of the suspect from the suspect marked on the visible light image of the package, and can further confirm the location using the suspect frame on the X-ray image of the package, the automatic recognition results of artificial intelligence, and the voice prompts of the image judge, so as to search for the suspect in the package.
  • the local inspector can record the disposal situation at the inspection station 140 after opening the package for inspection.
  • the disposal results include release, confiscation, and transfer to the police.
  • The types of disposal conclusions can be customized according to the specific business needs of the customer.
  • Fig. 2 schematically shows a schematic diagram of a security inspection device according to another embodiment of the present disclosure.
  • the security inspection equipment 200 may include a baffle 210 in addition to a security inspection box.
  • the inner surface of the curved part of the top of the baffle 210 may be provided with a mounting groove, wherein the curved part of the top may refer to the horizontal part of the top.
  • The mounting groove can be used to install the camera 220 and the supplemental light device 230.
  • The supplemental light device 230 can provide light when the camera 220 acquires a visible light image.
  • For example, a baffle is installed at the exit of the security inspection device 200 on the side where passengers walk, to ensure that the image judge has enough time to perform the judging operation and to prevent passengers from taking away packages for which no judging conclusion has been reached.
  • A slot for installing an LED fill light and a baggage capture camera is designed on the inner side above the baffle; because the LED fill light and the baggage capture camera are installed inside, they are not disturbed by passengers or staff, which fully guarantees the fill-light and photographing effects.
  • the baggage capture camera is used to take pictures of the appearance of the package.
  • After an unpacking instruction is issued for a judging task involving contraband, the judging conclusion can be returned to the security inspection equipment, and the system automatically triggers a sound and light alarm at the security checkpoint from which the judging task originated, reminding local security personnel that a suspected package needs to be intercepted and opened.
  • A gantry can be designed on one side of the baffle, on which an emergency stop button, a reset button, a belt start-stop button, and an indicator light (including a buzzer) are installed.
  • the reset button is used when the indicator buzzer alarms. Pressing the reset button will stop the alarm.
  • The indicator light and the buzzer can be an integrated device, and the indicator light has three states: green, red, and yellow. The indicator shows green when the security inspection equipment is working normally, yellow when the security checkpoint is offline, and red when an X-ray image is judged as requiring unpacking inspection.
  • The buzzer can alarm in two situations: the security checkpoint is offline, or an X-ray image is remotely judged as requiring unpacking inspection. When the buzzer alarms, the local inspector can press the reset button to stop the alarm.
  • In terms of usability, integrating the reset button and the indicator light on the gantry makes operation easier for on-site security personnel.
  • After a package to be opened triggers the sound and light alarm, on-site security personnel receive the alarm notification immediately and can conveniently and quickly press the reset button to stop the sound and light alarm.
  • Fig. 3 schematically shows a flowchart of an image processing method according to an embodiment of the present disclosure.
  • the method shown in FIG. 3 may be executed by an electronic device at the inspection station 140 shown in FIG. 1.
  • the present disclosure is not limited to this.
  • the method shown in FIG. 3 can also be directly executed by the security inspection device 110.
  • When the security inspection device 110 is provided with a display screen, the visible light image marked with the suspect can also be displayed directly on the electronic device of the security inspection device 110 to indicate the location of the suspect.
  • the method includes operations S310 to S330.
  • a scanned image marked with a suspicious object is obtained, where the scanned image is obtained by scanning the inspected object by the security inspection device.
  • A scanned image marked with a suspect can be obtained from an image judging station.
  • Staff at the image judging station can manually mark the location of the suspect, marking the area where the suspect is located, and then send the scanned image marked with the suspect to the inspection station.
  • It can also be sent to other staff who need it; for example, it can be sent directly to an electronic device held by the inspector.
  • an automatic image judgment server can use artificial intelligence algorithms to automatically mark suspects.
  • the scanned image may be an X-ray image, for example.
  • a visible light image corresponding to the scanned image is determined according to the scanning start time of the scanned image, where the visible light image is obtained by collecting the inspected object by the visible light image collecting device.
  • each scanned image may correspond to one or more frames of visible light images.
  • Adding the transfer time of the item in the security inspection equipment to the scanning start time of each scanned image gives the collection time of the first visible light frame of that item; based on this collection time, the visible light image corresponding to the scanned image can be determined from a large number of visible light images.
  • the time when the item is transferred in the security inspection device may be fixed.
  • the visible light image acquisition device may be a visible light camera, for example.
  • the suspect in the visible light image is marked according to the marked position of the suspect in the scanned image.
  • the visible light image marked with the suspect can be displayed on the electronic device of the inspection station to indicate the location of the suspect.
  • The method shown in FIG. 3 will be further described below with reference to FIGS. 4 to 11 in combination with specific embodiments.
  • Determining the visible light image corresponding to the scanned image according to the scanning start time of the scanned image includes: acquiring the transfer time of the inspected object in the security inspection device; determining the collection time of the visible light image according to the transfer time of the inspected object in the security inspection device and the scanning start time of the scanned image; and determining the visible light image corresponding to the scanned image, from the images collected by the visible light image acquisition device, according to the collection time of the visible light image.
  • Fig. 4 schematically shows a schematic diagram of determining a visible light image corresponding to the scanned image according to the scanning start time of the scanned image according to an embodiment of the present disclosure.
  • As shown in FIG. 4, the X-ray machine in the security inspection equipment starts scanning an image at time t0. When the security inspection equipment uploads the X-ray scanned image (hereinafter referred to as the X-ray image), the X-ray image information can include the scanning start time, and the package photo is found from this scanning start time.
  • The baggage capture camera may be located at the exit of the security inspection machine, such as the camera 112 shown in FIG. 1 or the camera 220 shown in FIG. 2.
  • Assume that the front position a of the camera's shooting range, the position b of the beam exit surface of the X-ray machine, and the belt forward speed v are known. Let the distance between position a and position b be Δx, the scanning start time be t0, and the time when the package reaches position a be t1; t1 is also the collection time of the visible light image, and Δx/v is the transfer time of the baggage in the security inspection equipment, so t1 = t0 + Δx/v.
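  • A hedged sketch of this timing rule, assuming the camera stamps every frame with a capture time; the helper name find_package_photo and the tolerance value are illustrative, not from the patent.

```python
def find_package_photo(frames, t0, delta_x, v, tolerance=0.5):
    """Return the visible-light frame captured closest to t1 = t0 + Δx/v.

    frames    -- list of (capture_time, image) pairs from the baggage capture camera
    t0        -- scanning start time reported with the X-ray image
    delta_x   -- distance between camera front position a and the beam exit surface b
    v         -- belt forward speed
    tolerance -- assumed maximum acceptable mismatch, in seconds
    """
    t1 = t0 + delta_x / v                    # transfer time of the baggage is Δx / v
    capture_time, image = min(frames, key=lambda f: abs(f[0] - t1))
    if abs(capture_time - t1) > tolerance:   # no frame was captured close enough to t1
        return None
    return image
```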
  • security inspection equipment can be classified into two types, single-view and dual-view.
  • When the security inspection equipment is of the dual-view type, it has a main view and a secondary view, and scanning the luggage with the X-ray machine yields two X-ray images.
  • The X-ray image obtained under the main view is scanned at an angle perpendicular to the package, similar to scanning the package downward from directly above it.
  • Fig. 5 schematically shows a schematic diagram of an X-ray image scanned under the main viewing angle.
  • The X-ray image includes the marked position of the suspect; for example, it can be marked in the form of a suspect frame, as shown by the dashed box in FIG. 5.
  • Fig. 6 schematically shows a schematic diagram of one frame of visible light images taken by a camera.
  • the suspect in the visible light image can be marked.
  • The visible light image includes the marked position of the suspect; for example, it may be marked in the form of a suspect frame, as shown by the dashed box in FIG. 6.
  • the X-ray image obtained by scanning in the sub-view is obtained by scanning at an angle parallel to the package, similar to scanning the package on the side of the package.
  • Fig. 7 schematically shows a schematic diagram of an X-ray image scanned under a secondary viewing angle.
  • the X-ray image includes the marked position of the suspect, for example, it may be marked in the form of a suspect frame.
  • FIG. 8 schematically shows a schematic diagram of another frame of visible light image taken by a camera.
  • the suspect in the visible light image can be marked.
  • the visible light image includes the marked position of the suspect, for example, it may be marked in the form of a suspect frame.
  • When the security inspection device is of the single-view type, scanning the luggage with the X-ray machine yields one X-ray image, which may be obtained under either the main view or the secondary view.
  • Marking the suspect in the visible light image according to the marked position of the suspect in the scanned image includes: obtaining the pixel mapping relationship between the scanned image and the visible light image; determining the marked position of the suspect in the visible light image according to the marked position of the suspect in the scanned image, the size information of the scanned image, and the pixel mapping relationship; and marking the suspect in the visible light image according to the marked position of the suspect in the visible light image.
  • The image judge can draw a suspect frame on the suspect in the scanned image under either view.
  • The marking method for the package photo is described below for the single-view and dual-view cases.
  • In the single-view case, the single-view scanned image with the suspect frame is shown in FIG. 5, and the package photo marked with the suspect frame is shown in FIG. 6; the specific marking method is as follows.
  • The remote image judge can draw a suspect frame on the X-ray image, and the system obtains the length and width data of the X-ray image.
  • Fig. 9 schematically shows a schematic diagram for characterizing the size of an X-ray image.
  • the lower left end point of the X-ray image can be taken as the coordinate origin O, and the length (Length) of the X-ray image can be set as L 1 and the width (Width) as W 1 .
  • the position information of the suspect frame is (x 1 , y 1 ), L 2 , W 2 , where L 2 is the length of the suspect frame and W 2 is the width of the suspect frame.
  • FIG. 10 schematically shows a schematic diagram for characterizing the size of the visible light image.
  • the lower left end point of the visible light image can be taken as the coordinate origin O, and the length (Length) of the visible light image can be set as L 3 and the width (Width) as W 3 .
  • the position information of the suspect frame of the package visible light image is (x 2 , y 2 ), L 4 , W 4 , where L 4 is the length of the suspect frame and W 4 is the width of the suspect frame.
  • Since one visible light image taken by the camera may contain multiple small packages, the photo containing the suspect can be found by the shooting time and the suspect frame can be marked on the first package. To avoid the suspect frame straddling multiple small packages, the conversion cannot simply scale the X-ray image and the visible light image by their lengths in equal proportion: the length of the visible light image is fixed, whereas the length of an object in the scanned image is proportional to the actual scanned package (the larger the actual object, the larger it appears in the scanned image), so the length of the scanned image changes dynamically. To ensure the accuracy of the suspect frame, the pixel mapping relationship between the scanned image and the package visible light image needs to be determined.
  • marking the suspect in the visible light image according to the marked position of the suspect in the scanned image may include the following steps.
  • In the first step, the pixel mapping relationship K between the scanned image and the package visible light image is determined in advance.
  • The length of the X-ray image corresponding to the entire visible light image can be obtained by calibration, measured in pixels.
  • Once the camera position is fixed, the shooting range is fixed. Assuming the length of the package visible light image is L3, a marker that exactly fills the entire length of the package visible light image can be selected and scanned by the X-ray machine, yielding the X-ray image (scanned image) of this marker.
  • From this X-ray image, the pixel length occupied by the marker on the scanned image can be obtained as L0; that is, the pixel mapping relationship K between the package visible light image and the X-ray scanned image can be determined as L3/L0.
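  • A sketch of this one-off calibration, under the assumption that the marker's pixel length on the scanned image has already been measured; the function name and the example numbers are illustrative only.

```python
def calibrate_pixel_mapping(visible_length_l3: float, marker_pixel_length_l0: float) -> float:
    """Pixel mapping relationship K = L3 / L0, where L3 is the fixed length of the
    package visible-light image and L0 is the pixel length the calibration marker
    occupies on the X-ray (scanned) image."""
    if marker_pixel_length_l0 <= 0:
        raise ValueError("the marker must be visible on the scanned image")
    return visible_length_l3 / marker_pixel_length_l0

# Illustrative numbers: a 1280-pixel-long visible image whose marker spans
# 640 pixels on the X-ray image gives K = 2.0.
K = calibrate_pixel_mapping(1280, 640)
```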
  • In the second step, since the width range of objects scanned by the X-ray machine generally does not exceed the belt width, while the width of the package visible light image is generally greater than the width of the conveyor belt, once the installation position of the camera is determined, the interval widths between the upper and lower edges of the package visible light image and the upper and lower edges of the conveyor belt can be calculated; these values are c and d respectively.
  • FIG. 11 schematically shows another schematic diagram for characterizing the size of a visible light image.
  • c represents the interval width between the upper edge of the belt in the visible light image and the upper edge of the visible light image
  • d represents the interval width between the lower edge of the belt and the lower edge of the visible light image in the visible light image.
  • c and d can be further explained with reference to the visible light image shown in FIG. 8.
  • In this visible light image, the package runs on the belt; the interval between the upper edge of the visible light image and the upper edge of the belt is c, and the interval between the lower edge of the visible light image and the lower edge of the belt is d.
  • In the third step, when L3 > L1, the package visible light image corresponding to the X-ray image is displayed in a single visible light image; calculating the values of (x2, y2), L4, W4 is then sufficient to mark the suspect frame on the package visible light image.
  • When L3 < L1, the package visible light image corresponding to the X-ray image is divided into multiple visible light images for display. Using the value L0 mapped in the first step, the number of visible light images can be calculated by rounding the value of L0/L1 and adding 1; the multiple visible light images are then stitched, and the method of marking the suspect frame on the stitched package visible light image is the same as above.
  • Taking the case where the package visible light image is split into two visible light images as an example (see FIG. 4), the time when the package reaches position a is t1, and the time after the package has traveled a further distance L0 forward on the belt is t2 = t1 + L0/v; the second visible light image can then be found from the value of t2.
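  • A small sketch of the two-frame case described above; only the directly stated relation t2 = t1 + L0/v is implemented, and the function name is illustrative.

```python
def second_frame_time(t1: float, l0: float, v: float) -> float:
    """Capture time t2 of the second visible-light frame when the package
    visible-light image is split across two frames (see FIG. 4).

    t1 -- time the package reaches position a (collection time of the first frame)
    l0 -- value L0 from the calibration step, used here as the belt travel
          distance between consecutive frames, as in the description
    v  -- belt forward speed
    """
    return t1 + l0 / v   # t2 = t1 + L0 / v
```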
  • the marked position of the suspect in the visible light image in the visible light image is determined according to the marked position of the suspect in the scanned image, the size information of the scanned image, and the pixel mapping relationship according to the following formula:
  • x2 = (x1/L1)*L3*K
  • y2 = (y1/W1)*(W3-c-d)+d
  • L4 = (L2/L1)*L3*K
  • W4 = (W2/W1)*(W3-c-d)
  • the coordinates of the suspect frame of the suspect in the scanned image are (x 1 , y 1 ), L 1 is the length of the scanned image, W 1 is the width of the scanned image, and L 2 is the length of the suspect frame of the suspect in the scanned image.
  • W2 is the width of the suspect frame of the suspect in the scanned image; the coordinates of the suspect frame of the suspect in the visible light image are (x2, y2), L3 is the length of the visible light image, W3 is the width of the visible light image, L4 is the length of the suspect frame of the suspect in the visible light image, and W4 is the width of the suspect frame of the suspect in the visible light image; K is the pixel mapping relationship, c represents the interval width between the upper edge of the belt in the visible light image and the upper edge of the visible light image, and d represents the interval width between the lower edge of the belt in the visible light image and the lower edge of the visible light image.
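  • The mapping can be written directly from these formulas; the following sketch is illustrative only (the function name and argument grouping are assumptions, not an API defined by the patent).

```python
def map_suspect_box(xray_box, xray_size, visible_size, K, c, d):
    """Map a suspect frame from X-ray image coordinates to visible-light image
    coordinates using x2 = (x1/L1)*L3*K, y2 = (y1/W1)*(W3-c-d)+d,
    L4 = (L2/L1)*L3*K, W4 = (W2/W1)*(W3-c-d).

    xray_box     -- (x1, y1, L2, W2): suspect frame drawn by the image judge
    xray_size    -- (L1, W1): length and width of the X-ray image
    visible_size -- (L3, W3): length and width of the visible-light image
    K            -- pixel mapping relationship L3 / L0 from the calibration step
    c, d         -- gaps between the belt edges and the visible-image edges
    """
    x1, y1, L2, W2 = xray_box
    L1, W1 = xray_size
    L3, W3 = visible_size
    x2 = (x1 / L1) * L3 * K
    y2 = (y1 / W1) * (W3 - c - d) + d
    L4 = (L2 / L1) * L3 * K
    W4 = (W2 / W1) * (W3 - c - d)
    return x2, y2, L4, W4
```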
  • In the dual-view case, two scanned images as shown in FIG. 5 and FIG. 7 can be obtained.
  • When the remote image judge draws the suspect frame in the main view, the marking method for the package visible light image is the same as the single-view marking method described above.
  • The remote image judge can also draw the suspect frame in the secondary view.
  • The secondary-view X-ray image with the suspect frame is shown in FIG. 7, and the visible light image marked with the suspect frame is shown in FIG. 8.
  • A specific example of the marking method is as follows.
  • Since the camera can only capture a visible light image from one viewing angle, which corresponds to the main view of the security inspection device and is taken from the vertical direction, the visible light image does not show the height of the luggage. The marking method therefore only calculates the x-coordinate data, with the same calculation as above, while the y-extent covers the full width W3 of the visible light image; the position of the suspect frame in the secondary view of the X-ray image (which corresponds to the side of the package) is thus mapped to a band on the visible light image.
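  • For the secondary view, only the x-direction can be transferred because the overhead camera does not observe package height; the sketch below is one reasonable reading of the full-width y-extent described above, with the coordinate origin at the lower-left corner as in FIG. 10.

```python
def map_secondary_view_box(xray_box, xray_size, visible_size, K):
    """Map a suspect frame drawn on the secondary-view X-ray image to a vertical
    band on the visible-light image: x and the frame length are computed as in the
    main-view case, while the band spans the full width W3 of the visible image."""
    x1, _y1, L2, _W2 = xray_box          # only the x data of the secondary view is used
    L1, _W1 = xray_size
    L3, W3 = visible_size
    x2 = (x1 / L1) * L3 * K
    L4 = (L2 / L1) * L3 * K
    return x2, 0.0, L4, W3               # (x2, y2, L4, W4): y runs over the whole width
```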
  • In terms of accuracy, the inspector can locate the position of contraband during the unpacking inspection through the suspect frame on the visible light image of the package appearance and the suspect frame on the X-ray image. Furthermore, the AI recognition results and the voice prompts of the image judge can also be combined to locate the contraband quickly and accurately and find the contraband in the package.
  • In terms of inspection efficiency, the centralized image judging station can display and judge the images in real time.
  • Centralized image judging ensures judging that is synchronized in real time with the security inspection equipment and, compared with the local judging approach, introduces no judging delay, so it can be effectively applied to scenarios with high real-time requirements such as subway security inspection.
  • the suspect frame is also superimposed on the visible light image of the package appearance, which can improve the efficiency of local security personnel in locating suspicious items when opening the package for inspection, so as to achieve the collaboration of on-site security personnel and remote image judgment.
  • Fig. 12 schematically shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
  • the image processing apparatus 400 includes an acquisition module 410, a determination module 420, and a marking module 430.
  • the acquiring module 410 is used to acquire a scanned image marked with a suspect, where the scanned image is obtained by scanning the inspected object through a security inspection device.
  • the determining module 420 is configured to determine the visible light image corresponding to the scanned image according to the scanning start time of the scanned image, where the visible light image is obtained by collecting the inspected object by the visible light image collecting device.
  • the marking module 430 is used to mark the suspect in the visible light image according to the marked position of the suspect in the scanned image.
  • the image processing device 400 further includes a display module for displaying the visible light image marked with the suspect on the electronic equipment of the inspection station to indicate the location of the suspect.
  • the determination module 420 includes a first acquisition unit, a first determination unit, and a second determination unit.
  • the first acquiring unit is used to acquire the transmission time length of the inspected object in the security inspection device.
  • the first determining unit is configured to determine the collection time of the visible light image according to the transmission time length of the inspected object in the security inspection device and the start time of the scanned image.
  • the second determining unit is configured to determine the visible light image corresponding to the scanned image from the images collected by the visible light image collecting device according to the time when the visible light image is collected.
  • the marking module 430 includes a second acquiring unit, a third determining unit, and a marking unit.
  • the second acquiring unit is used to acquire the pixel mapping relationship between the scanned image and the visible light image.
  • the third determining unit is used to determine the marked position of the suspect in the visible light image in the visible light image according to the marked position of the suspect in the scanned image, the size information of the scanned image, and the pixel mapping relationship.
  • the marking unit is used to mark the suspect in the visible light image according to the marking position of the suspect in the visible light image in the visible light image.
  • the marked position of the suspect in the visible light image in the visible light image is determined according to the marked position of the suspect in the scanned image, the size information of the scanned image, and the pixel mapping relationship according to the following formula:
  • x2 = (x1/L1)*L3*K
  • y2 = (y1/W1)*(W3-c-d)+d
  • L4 = (L2/L1)*L3*K
  • W4 = (W2/W1)*(W3-c-d)
  • The coordinates of the suspect frame of the suspect in the scanned image are (x1, y1), L1 is the length of the scanned image, W1 is the width of the scanned image, L2 is the length of the suspect frame of the suspect in the scanned image, and W2 is the width of the suspect frame of the suspect in the scanned image.
  • the coordinates of the suspect frame of the suspect in the visible light image are (x 2 , y 2 ), L 3 is the length of the visible light image, W 3 is the width of the visible light image, L 4 is the length of the suspect frame of the suspect in the visible light image, W 4 is the width of the suspect frame of the suspect in the visible light image.
  • K is the pixel mapping relationship
  • c represents the interval width between the upper edge of the belt in the visible light image and the upper edge of the visible light image
  • d represents the interval width between the lower edge of the belt and the lower edge of the visible light image in the visible light image.
  • The acquisition module 410 is configured to acquire a scanned image marked with a suspect from an image judging station, and/or to acquire a scanned image marked with a suspect from an automatic image judging server.
  • any number of modules, submodules, units, and subunits, or at least part of the functions of any number of them, may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be split into multiple modules for implementation.
  • any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be at least partially implemented as a hardware circuit, such as a field programmable gate array (FPGA), a programmable logic array (PLA), System-on-chip, system-on-substrate, system-on-package, application-specific integrated circuit (ASIC), or hardware or firmware in any other reasonable way that integrates or encapsulates the circuit, or can be implemented by software, hardware, and firmware. Any one of these implementations or an appropriate combination of any of them can be implemented.
  • one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be at least partially implemented as a computer program module, and the computer program module may perform corresponding functions when it is executed.
  • any number of the acquiring module 410, the determining module 420, and the marking module 430 may be combined into one module for implementation, or any one of the modules may be split into multiple modules. Or, at least part of the functions of one or more of these modules may be combined with at least part of the functions of other modules and implemented in one module.
  • At least one of the acquisition module 410, the determination module 420, and the marking module 430 may be at least partially implemented as a hardware circuit, such as a field programmable gate array (FPGA), a programmable logic array (PLA), System-on-chip, system-on-substrate, system-on-package, application-specific integrated circuit (ASIC), or can be implemented by hardware or firmware such as any other reasonable way of integrating or packaging the circuit, or by software, hardware, and firmware. Any one of these implementations or an appropriate combination of any of them can be implemented.
  • at least one of the acquiring module 410, the determining module 420, and the marking module 430 may be at least partially implemented as a computer program module, and when the computer program module is run, it may perform a corresponding function.
  • FIG. 13 schematically shows a block diagram of a computer system suitable for implementing an image processing method and apparatus according to an embodiment of the present disclosure.
  • the computer system shown in FIG. 13 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present disclosure.
  • the computer system 500 includes a processor 510 and a computer-readable storage medium 520.
  • the computer system 500 can execute a method according to an embodiment of the present disclosure.
  • the processor 510 may include, for example, a general-purpose microprocessor, an instruction set processor and/or a related chipset and/or a special-purpose microprocessor (for example, an application specific integrated circuit (ASIC)), and so on.
  • the processor 510 may also include on-board memory for caching purposes.
  • the processor 510 may be a single processing unit or multiple processing units for executing different actions of a method flow according to an embodiment of the present disclosure.
  • The computer-readable storage medium 520 may be a non-volatile computer-readable storage medium; specific examples include but are not limited to: magnetic storage devices, such as magnetic tape or hard disks (HDD); optical storage devices, such as optical disks (CD-ROM); memory, such as random access memory (RAM) or flash memory; and so on.
  • the computer-readable storage medium 520 may include a computer program 521, and the computer program 521 may include code/computer-executable instructions, which when executed by the processor 510 cause the processor 510 to perform the method according to the embodiment of the present disclosure or any modification thereof.
  • the computer program 521 may be configured to have, for example, computer program code including computer program modules.
  • The code in the computer program 521 may include one or more program modules, such as module 521A, module 521B, and so on. It should be noted that the division and number of modules are not fixed; those skilled in the art can use appropriate program modules or combinations of program modules according to the actual situation, and when these combinations of program modules are executed by the processor 510, the processor 510 can perform the method according to the embodiments of the present disclosure or any variation thereof.
  • At least one of the acquiring module 410, the determining module 420, and the marking module 430 may be implemented as a computer program module described with reference to FIG. 13, which, when executed by the processor 510, may implement the corresponding operations described above .
  • the present disclosure also provides a computer-readable storage medium.
  • The computer-readable storage medium may be included in the device/apparatus/system described in the above embodiments, or it may exist alone without being assembled into the device/apparatus/system.
  • the aforementioned computer-readable storage medium carries one or more programs, and when the aforementioned one or more programs are executed, the method according to the embodiments of the present disclosure is implemented.
  • the computer-readable storage medium may be a non-volatile computer-readable storage medium, for example, may include but not limited to: portable computer disk, hard disk, random access memory (RAM), read-only memory (ROM) , Erasable programmable read-only memory (EPROM or flash memory), portable compact disk read-only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • each block in the flowchart or block diagram may represent a module, program segment, or part of code, and the above-mentioned module, program segment, or part of code contains one or more for realizing the specified logical function Executable instructions.
  • the functions marked in the block may also occur in a different order from the order marked in the drawings. For example, two blocks shown in succession can actually be executed substantially in parallel, or they can sometimes be executed in the reverse order, depending on the functions involved.
  • each block in the block diagram or flowchart, and the combination of blocks in the block diagram or flowchart can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or can be It is realized by a combination of dedicated hardware and computer instructions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Analysing Materials By The Use Of Radiation (AREA)

Abstract

An image processing method, including: acquiring a scanned image marked with a suspect, wherein the scanned image is obtained by scanning an inspected object with a security inspection device (S310); determining, according to the scanning start time of the scanned image, a visible light image corresponding to the scanned image, wherein the visible light image is obtained by capturing the inspected object with a visible light image acquisition device (S320); and marking the suspect in the visible light image according to the marked position of the suspect in the scanned image (S330). An image processing apparatus, a computer system, and a computer-readable storage medium are also provided.

Description

Image processing method and apparatus, computer system, and readable storage medium
Technical Field
The present disclosure relates to an image processing method, an image processing apparatus, a computer system, and a computer-readable storage medium.
Background Art
In places with heavy foot traffic such as subway or railway stations, security inspection equipment is usually required to ensure the safety of personnel and the normal operation of vehicles; the security inspection equipment can inspect the packages carried by passengers. During the security check, a passenger places a package at one side of the security inspection equipment, the equipment conveys the package into the security check box for X-ray scanning and then conveys it out of the security check box, and staff judge whether there are suspicious items in the package by viewing the X-ray scanned image.
When staff judge that a package contains suspicious items, the package needs to be opened and inspected locally at the security checkpoint. However, because local security staff obtain very little information about the package, they cannot accurately locate the suspect during the unpacking inspection. In particular, against the background of centralized image judging systems being used on a large scale, the unpacking inspection stage relies on the premise that baggage remotely judged as requiring inspection cannot be taken away by passengers; since local security staff obtain very little package information and cannot accurately locate the suspect when the package is opened for inspection, problems such as low unpacking efficiency arise.
Summary of the Invention
One aspect of the present disclosure provides an image processing method, including: acquiring a scanned image marked with a suspect, wherein the scanned image is obtained by scanning an inspected object with a security inspection device; determining, according to the scanning start time of the scanned image, a visible light image corresponding to the scanned image, wherein the visible light image is obtained by capturing the inspected object with a visible light image acquisition device; and marking the suspect in the visible light image according to the marked position of the suspect in the scanned image.
Another aspect of the present disclosure provides an image processing apparatus, including: an acquisition module for acquiring a scanned image marked with a suspect, wherein the scanned image is obtained by scanning an inspected object with a security inspection device; a determination module for determining, according to the scanning start time of the scanned image, a visible light image corresponding to the scanned image, wherein the visible light image is obtained by capturing the inspected object with a visible light image acquisition device; and a marking module for marking the suspect in the visible light image according to the marked position of the suspect in the scanned image.
Another aspect of the present disclosure provides a computer system, including: one or more processors; and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method described above.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions which, when executed, are used to implement the method described above.
Another aspect of the present disclosure provides a computer program, the computer program including computer-executable instructions which, when executed, are used to implement the method described above.
Brief Description of the Drawings
For a more complete understanding of the present disclosure and its advantages, reference is now made to the following description taken in conjunction with the accompanying drawings, in which:
Fig. 1 schematically shows an application scenario of an image processing method and apparatus according to an embodiment of the present disclosure;
Fig. 2 schematically shows a schematic diagram of a security inspection device according to another embodiment of the present disclosure;
Fig. 3 schematically shows a flowchart of an image processing method according to an embodiment of the present disclosure;
Fig. 4 schematically shows a schematic diagram of determining a visible light image corresponding to a scanned image according to the scanning start time of the scanned image, according to an embodiment of the present disclosure;
Fig. 5 schematically shows a schematic diagram of an X-ray image scanned under the main view;
Fig. 6 schematically shows a schematic diagram of one frame of visible light image taken by a camera;
Fig. 7 schematically shows a schematic diagram of an X-ray image scanned under the secondary view;
Fig. 8 schematically shows a schematic diagram of another frame of visible light image taken by a camera;
Fig. 9 schematically shows a schematic diagram for characterizing the size of an X-ray image;
Fig. 10 schematically shows a schematic diagram for characterizing the size of a visible light image;
Fig. 11 schematically shows another schematic diagram for characterizing the size of a visible light image;
Fig. 12 schematically shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure; and
Fig. 13 schematically shows a block diagram of a computer system suitable for implementing an image processing method and apparatus according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood, however, that these descriptions are merely exemplary and are not intended to limit the scope of the present disclosure. In the following detailed description, for ease of explanation, numerous specific details are set forth to provide a comprehensive understanding of the embodiments of the present disclosure. Obviously, however, one or more embodiments may also be implemented without these specific details. In addition, in the following description, descriptions of well-known structures and technologies are omitted to avoid unnecessarily obscuring the concepts of the present disclosure.
The terms used herein are for the purpose of describing specific embodiments only and are not intended to limit the present disclosure. The terms "include", "comprise", and the like used herein indicate the presence of the stated features, steps, operations and/or components, but do not exclude the presence or addition of one or more other features, steps, operations or components.
All terms used herein (including technical and scientific terms) have the meanings commonly understood by those skilled in the art, unless otherwise defined. It should be noted that the terms used herein should be interpreted as having meanings consistent with the context of this specification, and should not be interpreted in an idealized or overly rigid manner.
Where an expression like "at least one of A, B and C, etc." is used, it should generally be interpreted in the sense commonly understood by those skilled in the art (for example, "a system having at least one of A, B and C" shall include, but is not limited to, systems having A alone, B alone, C alone, A and B, A and C, B and C, and/or A, B and C, etc.). Where an expression like "at least one of A, B or C, etc." is used, it should likewise be interpreted in the sense commonly understood by those skilled in the art (for example, "a system having at least one of A, B or C" shall include, but is not limited to, systems having A alone, B alone, C alone, A and B, A and C, B and C, and/or A, B and C, etc.).
Some block diagrams and/or flowcharts are shown in the accompanying drawings. It should be understood that some blocks in the block diagrams and/or flowcharts, or combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus, so that, when executed by the processor, these instructions create means for implementing the functions/operations illustrated in the block diagrams and/or flowcharts. The technology of the present disclosure may be implemented in the form of hardware and/or software (including firmware, microcode, etc.). In addition, the technology of the present disclosure may take the form of a computer program product on a computer-readable storage medium storing instructions, the computer program product being usable by or in combination with an instruction execution system.
Embodiments of the present disclosure provide an image processing method, an image processing apparatus, a computer system, and a computer-readable storage medium. The image processing method includes: acquiring a scanned image marked with a suspect, wherein the scanned image is obtained by scanning an inspected object with a security inspection device; determining, according to the scanning start time of the scanned image, a visible light image corresponding to the scanned image, wherein the visible light image is obtained by capturing the inspected object with a visible light image acquisition device; and marking the suspect in the visible light image according to the marked position of the suspect in the scanned image.
Fig. 1 schematically shows an application scenario of an image processing method and apparatus according to an embodiment of the present disclosure. It should be noted that Fig. 1 is only an example of a scenario to which embodiments of the present disclosure can be applied, to help those skilled in the art understand the technical content of the present disclosure, but it does not mean that the embodiments of the present disclosure cannot be applied to other devices, systems, environments or scenarios.
As shown in Fig. 1, in the application scenario 100, passengers' items need to pass through the security inspection device 110 for detection. The security inspection device 110 can perform X-ray scanning detection on the items and can send the scanned images of the detected items (for example, X-ray images) to the image judging station 120 via the network 130 in real time. The image judging station 120 may include, for example, a display, and the display can show the X-ray images of the items sent by the security inspection device 110 in real time. According to embodiments of the present disclosure, the image judging station 120 may be a remote image judging station or a local image judging station.
According to embodiments of the present disclosure, after the security inspection device 110 performs X-ray scanning detection on an item, it may also generate an image judging task and send it to a task scheduling center, which assigns an image judging station. There may be multiple task scheduling centers, and each task scheduling center can be communicatively connected to image judging stations and security inspection equipment. There may also be multiple image judging stations. According to embodiments of the present disclosure, the communication architecture of the security inspection equipment, the task scheduling centers, and the image judging stations can be designed in a decentralized, intelligent, distributed manner; based on this communication framework, marking suspects in visible light images using the image processing method provided by the present disclosure enables collaboration between unpacking inspection and remote image judging, and helps local security personnel quickly and accurately find the package to be opened and the suspect inside it.
According to embodiments of the present disclosure, the image judge can view the X-ray image of an item on the display and, when a suspect is found, send an unpacking instruction to the security inspection device 110 and/or the inspection station 140. After receiving the unpacking instruction, the security inspection device 110 and/or the inspection station 140 notifies the local inspector to take the corresponding item off the security inspection device 110 for unpacking inspection. According to embodiments of the present disclosure, the suspect in the X-ray image can be marked, and the marked image can be sent to the inspection station 140.
According to embodiments of the present disclosure, a visible light image acquisition device may be provided on the security inspection device 110. For example, the visible light image acquisition device includes a camera 111 and/or a camera 112, which may be arranged above the security check box 113 of the security inspection device 110. The camera 111 and/or the camera 112 may be used to acquire visible light images of the inspected object.
According to embodiments of the present disclosure, the security inspection device 110 can send the visible light images acquired by the camera 111 and/or the camera 112 to the inspection station 140.
According to embodiments of the present disclosure, the security check box 113 is provided with an item inlet and an item outlet; the conveying device 114 can pass through the item inlet and the item outlet, and both ends of the conveying device 114 are exposed outside the security check box 113. The conveying device 114 may be, for example, a conveyor belt.
An X-ray scanning device may be provided, for example, on the inner side of the top of the security check box 113, and the X-ray scanning device can perform X-ray scanning on items passing through the security check box.
According to embodiments of the present disclosure, the inspection station 140 can match and bind the visible light image of a package with the X-ray image of the package, so as to assist local security personnel in quickly and accurately finding the package to be opened for inspection.
According to embodiments of the present disclosure, the inspection station 140 can mark the suspect in the visible light image according to the marked position of the suspect in the X-ray image.
According to embodiments of the present disclosure, when performing an unpacking inspection, the local inspector can preliminarily determine the location of the suspect from the suspect marked on the visible light image of the package, and can further confirm the location of the suspect using the suspect frame on the X-ray image of the package, the automatic recognition results of artificial intelligence, the voice prompts of the image judge, and so on, so as to search for the suspect in the package.
According to embodiments of the present disclosure, after the unpacking inspection, the local inspector can record the disposal at the inspection station 140. Disposal results include release, confiscation, and handover to the police, and the types of disposal conclusions can be customized according to the specific business needs of the customer.
Fig. 2 schematically shows a schematic diagram of a security inspection device according to another embodiment of the present disclosure.
As shown in Fig. 2, in addition to the security check box, the security inspection device 200 may also include a baffle 210.
A mounting groove may be provided on the inner surface of the curved part of the top of the baffle 210, where the curved part of the top may refer to the horizontal part of the top. The mounting groove can be used to install the camera 220 and the supplemental light device 230.
The supplemental light device 230 can provide light when the camera 220 acquires visible light images.
According to embodiments of the present disclosure, for example, a baffle is installed at the exit of the security inspection device 200 on the side where passengers walk, to ensure that the image judge has enough time to perform the judging operation and to prevent passengers from taking away packages for which no judging conclusion has been reached. A slot for installing an LED fill light and a baggage capture camera is designed on the inner side above the baffle; the LED fill light and the baggage capture camera are installed inside, where they will not be disturbed by passengers or staff, fully guaranteeing the fill-light and photographing effects. The baggage capture camera is used to take photos of the package appearance.
According to embodiments of the present disclosure, after an unpacking instruction is issued for a judging task involving contraband, the judging conclusion can be returned to the security inspection equipment, and the system will automatically trigger a sound and light alarm at the security checkpoint from which the judging task originated, reminding local security personnel that a suspected package needs to be intercepted and opened.
According to embodiments of the present disclosure, a gantry can be designed on one side of the baffle, on which an emergency stop button, a reset button, a belt start-stop button, and an indicator light (including a buzzer) are installed. The reset button is used when the indicator buzzer alarms; pressing it stops the alarm. The indicator light and the buzzer can be an integrated device, and the indicator light has three states: green, red, and yellow. The indicator shows green when the security inspection equipment is working normally, yellow when the security checkpoint is offline, and red when an X-ray image is judged as requiring unpacking inspection. The buzzer can alarm in the following two situations: the security checkpoint is offline, or an X-ray image is remotely judged as requiring unpacking inspection. When the buzzer alarms, the local inspector can press the reset button to stop the alarm.
Through the embodiments of the present disclosure, in terms of usability, the reset button and the indicator light are integrated on the gantry, making operation easier for on-site security personnel. After a package to be opened triggers the sound and light alarm, on-site security personnel receive the alarm notification immediately and can conveniently and quickly press the reset button to stop the sound and light alarm.
Fig. 3 schematically shows a flowchart of an image processing method according to an embodiment of the present disclosure.
It should be noted that the method shown in Fig. 3 may be executed by an electronic device at the inspection station 140 shown in Fig. 1. Of course, the present disclosure is not limited to this. For example, the method shown in Fig. 3 may also be executed directly by the security inspection device 110; when the security inspection device 110 is provided with a display screen, the visible light image marked with the suspect can also be displayed directly on the electronic device of the security inspection device 110 to indicate the location of the suspect.
As shown in Fig. 3, the method includes operations S310 to S330.
In operation S310, a scanned image marked with a suspect is acquired, wherein the scanned image is obtained by scanning the inspected object with the security inspection device.
According to embodiments of the present disclosure, the scanned image marked with a suspect can be acquired from an image judging station. Staff at the image judging station can manually mark the location of the suspect, marking the area where the suspect is located, and then send the scanned image marked with the suspect to the inspection station; of course, it can also be sent to other staff who need it, for example directly to an electronic device held by the inspector.
According to embodiments of the present disclosure, the scanned image marked with a suspect can also be acquired from an automatic image judging server, which can use artificial intelligence algorithms to mark suspects automatically.
According to embodiments of the present disclosure, the scanned image may be, for example, an X-ray image.
In operation S320, a visible light image corresponding to the scanned image is determined according to the scanning start time of the scanned image, wherein the visible light image is obtained by capturing the inspected object with the visible light image acquisition device.
According to embodiments of the present disclosure, each scanned image may correspond to one or more frames of visible light images. Adding the transfer time of the item in the security inspection equipment to the scanning start time of each scanned image gives the collection time of the first visible light frame of that item; based on this collection time, the visible light image corresponding to the scanned image can be determined from a large number of visible light images. According to embodiments of the present disclosure, the transfer time of an item in the security inspection equipment may be fixed.
According to embodiments of the present disclosure, the visible light image acquisition device may be, for example, a visible light camera.
In operation S330, the suspect in the visible light image is marked according to the marked position of the suspect in the scanned image.
According to embodiments of the present disclosure, the visible light image marked with the suspect can be displayed on the electronic device of the inspection station to indicate the location of the suspect.
By marking the suspect in the visible light image, collaboration between unpacking inspection and remote image judging can be achieved, helping local staff quickly and accurately find the package to be opened and the suspicious items in it.
The method shown in Fig. 3 will be further described below with reference to Figs. 4 to 11 in combination with specific embodiments.
According to embodiments of the present disclosure, determining the visible light image corresponding to the scanned image according to the scanning start time of the scanned image includes: acquiring the transfer time of the inspected object in the security inspection device; determining the collection time of the visible light image according to the transfer time of the inspected object in the security inspection device and the scanning start time of the scanned image; and determining the visible light image corresponding to the scanned image, from the images collected by the visible light image acquisition device, according to the collection time of the visible light image.
Fig. 4 schematically shows a schematic diagram of determining the visible light image corresponding to the scanned image according to the scanning start time of the scanned image, according to an embodiment of the present disclosure.
As shown in Fig. 4, the X-ray machine in the security inspection equipment starts scanning an image at time t0. When the security inspection equipment uploads the X-ray scanned image (hereinafter referred to as the X-ray image), the X-ray image information can include the scanning start time, and the package photo is found from this scanning start time.
According to embodiments of the present disclosure, the baggage capture camera may be located at the exit of the security inspection machine, such as the camera 112 shown in Fig. 1 or the camera 220 shown in Fig. 2. Assume that the front position a of the camera's shooting range, the position b of the beam exit surface of the X-ray machine, and the belt forward speed v are known. Let the distance between position a and position b be Δx, the scanning start time be t0, the time when the package reaches position a be t1, the collection time of the visible light image be t1, and Δx/v be the transfer time of the baggage in the security inspection equipment; then t1 = t0 + Δx/v.
According to embodiments of the present disclosure, security inspection equipment can be divided into two types: single-view and dual-view. When the security inspection equipment is of the dual-view type, it has a main view and a secondary view, and scanning the luggage with the X-ray machine yields two X-ray images.
According to embodiments of the present disclosure, the X-ray image obtained under the main view is scanned at an angle perpendicular to the package, similar to scanning the package downward from directly above it.
Fig. 5 schematically shows a schematic diagram of an X-ray image scanned under the main view.
As shown in Fig. 5, the X-ray image includes the marked position of the suspect, which may, for example, be marked in the form of a suspect frame, as shown by the dashed box in Fig. 5.
Fig. 6 schematically shows a schematic diagram of one frame of visible light image taken by the camera.
According to the suspect frame in the X-ray image scanned under the main view, the suspect in the visible light image can be marked. As shown in Fig. 6, the visible light image includes the marked position of the suspect, which may, for example, be marked in the form of a suspect frame, as shown by the dashed box in Fig. 6.
According to embodiments of the present disclosure, the X-ray image obtained under the secondary view is scanned at an angle parallel to the package, similar to scanning the package from its side.
Fig. 7 schematically shows a schematic diagram of an X-ray image scanned under the secondary view.
As shown in Fig. 7, the X-ray image includes the marked position of the suspect, which may, for example, be marked in the form of a suspect frame.
Fig. 8 schematically shows a schematic diagram of another frame of visible light image taken by the camera.
According to the suspect frame in the X-ray image scanned under the secondary view, the suspect in the visible light image can be marked. As shown in Fig. 8, the visible light image includes the marked position of the suspect, which may, for example, be marked in the form of a suspect frame.
According to embodiments of the present disclosure, when the security inspection equipment is of the single-view type, scanning the luggage with the X-ray machine yields one X-ray image, which may be obtained under either the main view or the secondary view.
According to embodiments of the present disclosure, marking the suspect in the visible light image according to the marked position of the suspect in the scanned image includes: acquiring the pixel mapping relationship between the scanned image and the visible light image; determining the marked position of the suspect in the visible light image according to the marked position of the suspect in the scanned image, the size information of the scanned image, and the pixel mapping relationship; and marking the suspect in the visible light image according to the marked position of the suspect in the visible light image.
According to embodiments of the present disclosure, the image judge can draw a suspect frame on the suspect in the scanned image under either view. The marking method for the package photo is described below for the single-view and dual-view cases.
According to embodiments of the present disclosure, in the single-view case, the single-view scanned image with the suspect frame is shown in Fig. 5, the package photo marked with the suspect frame is shown in Fig. 6, and the specific marking method is as follows.
According to embodiments of the present disclosure, the remote image judge can draw a suspect frame on the X-ray image, and the system obtains the length and width data of the X-ray image. Fig. 9 schematically shows a schematic diagram for characterizing the size of an X-ray image. As shown in Fig. 9, the lower left corner of the X-ray image can be taken as the coordinate origin O, the length of the X-ray image is L1, and its width is W1. The position information of the suspect frame is (x1, y1), L2, W2, where L2 is the length of the suspect frame and W2 is its width.
According to embodiments of the present disclosure, once the installation position of the camera is determined, the length and width of the visible light image are fixed. Fig. 10 schematically shows a schematic diagram for characterizing the size of a visible light image. As shown in Fig. 10, the lower left corner of the visible light image can be taken as the coordinate origin O, the length of the visible light image is L3, and its width is W3. The position information of the suspect frame in the package visible light image is computed as (x2, y2), L4, W4, where L4 is the length of the suspect frame and W4 is its width.
Since one visible light image taken by the camera may contain multiple small packages, the photo containing the suspect can be found by the shooting time and the suspect frame can be marked on the first package. To avoid the suspect frame straddling multiple small packages, the conversion cannot simply be an equal-proportion calculation based on the lengths of the X-ray image and the visible light image: the length of the visible light image is fixed, whereas the length of an object in the scanned image is proportional to the actual scanned package (the larger the actual object, the larger it appears in the scanned image), so the length of the scanned image changes dynamically. To ensure the accuracy of the suspect frame, the pixel mapping relationship between the scanned image and the package visible light image needs to be determined.
According to embodiments of the present disclosure, marking the suspect in the visible light image according to the marked position of the suspect in the scanned image may include the following steps.
In the first step, the pixel mapping relationship K between the scanned image and the package visible light image is determined in advance. The length of the X-ray image corresponding to the entire visible light image can be obtained by calibration, measured in pixels. Once the camera position is fixed, the shooting range is fixed. Assuming the length of the package visible light image is L3, a marker that exactly fills the entire length of the package visible light image can be selected and scanned by the X-ray machine to obtain its X-ray image (scanned image). From this X-ray image, the pixel length occupied by the marker on the scanned image can be obtained as L0; that is, the pixel mapping relationship K between the package visible light image and the X-ray scanned image can be determined as L3/L0.
In the second step, since the width range of objects scanned by the X-ray machine generally does not exceed the belt width, while the width of the package visible light image is generally greater than the width of the conveyor belt, once the installation position of the camera is determined, the interval widths between the upper and lower edges of the package visible light image and the upper and lower edges of the conveyor belt can be calculated; these values are c and d respectively.
Fig. 11 schematically shows another schematic diagram for characterizing the size of a visible light image. As shown in Fig. 11, c represents the interval width between the upper edge of the belt in the visible light image and the upper edge of the visible light image, and d represents the interval width between the lower edge of the belt in the visible light image and the lower edge of the visible light image. Specifically, c and d can be further explained with reference to the visible light image shown in Fig. 8: in that visible light image, the package runs on the belt, the interval between the upper edge of the visible light image and the upper edge of the belt is c, and the interval between the lower edge of the visible light image and the lower edge of the belt is d.
In the third step, when L3 > L1, the package visible light image corresponding to the X-ray image is displayed in a single visible light image, and the values of (x2, y2), L4, W4 are calculated to mark the suspect frame on the package visible light image: x2 = (x1/L1)*L3*(L3/L0), y2 = (y1/W1)*(W3-c-d)+d, L4 = (L2/L1)*L3*(L3/L0), W4 = (W2/W1)*(W3-c-d).
When L3 < L1, the package visible light image corresponding to the X-ray image is divided into multiple visible light images for display. Using the value L0 mapped in the first step, the number of visible light images can be calculated by rounding the value of L0/L1 and adding 1; the multiple visible light images are then stitched, and the method of marking the suspect frame on the stitched package visible light image is the same as above. Taking the case where the package visible light image is split into two visible light images as an example, as shown in Fig. 4, the time when the package reaches position a is t1, and the time after the package has traveled a further distance L0 forward on the belt is t2, t2 = t1 + L0/v; the second visible light image can then be found from the value of t2.
According to embodiments of the present disclosure, the marked position of the suspect in the visible light image is determined from the marked position of the suspect in the scanned image, the size information of the scanned image, and the pixel mapping relationship according to the following formulas:
x2 = (x1/L1)*L3*K, y2 = (y1/W1)*(W3-c-d)+d,
L4 = (L2/L1)*L3*K, W4 = (W2/W1)*(W3-c-d).
Here, the coordinates of the suspect frame of the suspect in the scanned image are (x1, y1), L1 is the length of the scanned image, W1 is the width of the scanned image, L2 is the length of the suspect frame in the scanned image, and W2 is its width; the coordinates of the suspect frame of the suspect in the visible light image are (x2, y2), L3 is the length of the visible light image, W3 is the width of the visible light image, L4 is the length of the suspect frame in the visible light image, and W4 is its width; K is the pixel mapping relationship, c represents the interval width between the upper edge of the belt in the visible light image and the upper edge of the visible light image, and d represents the interval width between the lower edge of the belt in the visible light image and the lower edge of the visible light image.
According to embodiments of the present disclosure, in the dual-view case, two scanned images as shown in Figs. 5 and 7 can be obtained. When the remote image judge draws the suspect frame in the main view, the marking method for the package visible light image is the same as the single-view marking method described above. When the remote image judge draws the suspect frame in the secondary view, the secondary-view X-ray image with the suspect frame is shown in Fig. 7, the visible light image marked with the suspect frame is shown in Fig. 8, and a specific example of the marking method is as follows.
According to embodiments of the present disclosure, since the camera can only capture a visible light image from one viewing angle, corresponding to the main view of the security inspection device and taken from the vertical direction, the visible light image does not show the height of the luggage. The marking method therefore only calculates the x-coordinate data, with the same calculation as above, and the y-extent is the full width W3 of the visible light image; the position of the suspect frame in the secondary view of the X-ray image (which corresponds to the side of the package) is mapped to a band on the visible light image.
Through the embodiments of the present disclosure, in terms of accuracy, the inspector can locate the position of contraband during the unpacking inspection through the suspect frame on the visible light image of the package appearance and the suspect frame on the X-ray image. Furthermore, the AI recognition results and the voice prompts of the image judge can also be combined to locate the contraband quickly and accurately and find the contraband in the package.
Through the embodiments of the present disclosure, in terms of inspection efficiency, the centralized image judging station can display and judge images in real time; centralized judging ensures judging synchronized in real time with the security inspection equipment and, compared with local judging, introduces no judging delay, so it can be effectively applied to scenarios with high real-time requirements such as subway security inspection. At the same time, superimposing the suspect frame on the visible light image of the package appearance can improve the efficiency with which local security personnel locate suspicious items during unpacking inspection, thereby achieving collaboration between on-site security personnel and remote image judging.
图12示意性示出了根据本公开实施例的图像处理装置的框图。
如图12所示,图像处理装置400包括获取模块410、确定模块420和标记模块430。
获取模块410用于获取被标记有嫌疑物的扫描图像,其中,扫描图像是通过安检设备对被检查对象进行扫描得到的。
确定模块420用于根据扫描图像的开始扫描时刻,确定与扫描图像对应的可见光图像,其中,可见光图像是通过可见光图像采集设备对被检查对象进行采集得到的。
标记模块430用于根据扫描图像中嫌疑物的标记位置,对可见光图像中的嫌疑物进行标记。
通过对可见光图像中的嫌疑物进行标记,可以实现开包检查与远程判图的协作,帮助本地工作人员快速、准确的找到开检包裹以及包裹中的嫌疑物品。
根据本公开的实施例,图像处理装置400还包括展示模块,用于在开检站的电子设备上展示标记有嫌疑物的可见光图像,以指示嫌疑物的位置。
根据本公开的实施例,确定模块420包括第一获取单元、第一确定单元和第二确定单元。
第一获取单元用于获取被检查对象在安检设备中的传送时长。
第一确定单元用于根据被检查对象在安检设备中的传送时长和扫描图像的开始扫描时刻确定可见光图像的采集时刻。
第二确定单元用于根据可见光图像的采集时刻从可见光图像采集设备采集的图像中确定扫描图像对应的可见光图像。
根据本公开的实施例,标记模块430包括第二获取单元、第三确定单元和标记单元。
第二获取单元用于获取扫描图像与可见光图像之间的像素映射关系。
第三确定单元用于根据扫描图像中嫌疑物的标记位置、扫描图像的尺寸信息、像素映射关系确定可见光图像中的嫌疑物在可见光图像中的标记位置。
标记单元,用于根据可见光图像中的嫌疑物在可见光图像中的标记位置对可见光图像中的嫌疑物进行标记。
根据本公开的实施例,按照如下公式根据扫描图像中嫌疑物的标记位置、扫描图像的尺寸信息、像素映射关系确定可见光图像中的嫌疑物在可见光图像中的标记位置:
x 2=(x 1/L 1)*L 3*K,y 2=(y 1/W 1)*(W 3-c-d)+d,
L 4=(L 2/L 1)*L 3*K,W 4=(W 2/W 1)*(W 3-c-d)。
其中,扫描图像中嫌疑物的嫌疑框的坐标为(x 1,y 1),L 1为扫描图像的长,W 1为扫描图像的宽,L 2为扫描图像中嫌疑物的嫌疑框的长,W 2为扫描图像中嫌疑物的嫌疑框的宽。可见光图像中的嫌疑物的嫌疑框的坐标为(x 2,y 2),L 3为可见光图像的长,W 3为可见光图像的宽,L 4为可见光图像中嫌疑物的嫌疑框的长,W 4为可见光图像中嫌疑物的嫌疑框的宽。K为像素映射关系,c表示可见光图像中的皮带上边缘与可见光图像的上边缘的间隔宽度,d表示可见光图像中的皮带下边缘与可见光图像的下边缘的间隔宽度。
根据本公开的实施例,获取模块410用于获取来自判图站的被标记有嫌疑物的扫描图像,以及/或者获取来自自动判图服务器的被标记有嫌疑物的扫描图像。
根据本公开的实施例的模块、子模块、单元、子单元中的任意多个、或其中任意多个的至少部分功能可以在一个模块中实现。根据本公开实施例的模块、子模块、单元、子单元中的任意一个或多个可以被拆分成多个模块来实现。根据本公开实施例的模块、子模块、单元、子单元中的任意一个或多个可以至少被部分地实现为硬件电路,例如现场可编程门阵列(FPGA)、可编程逻辑阵列(PLA)、片上系统、基板上的系统、封装上的系统、专用集成电路(ASIC),或可以通过对电路进行集成或封装的任何其他的合理方式的硬件或固件来实现,或以软件、硬件以及固件三种实现方式中任意一种或以其中任意几种的适当组合来实现。或者,根据本公开实施例的模块、子模块、单元、子单元中的一个或多个可以至少被部分地实现为计算机程序模块,当该计算机程序模块被运行时,可以执行相应的功能。
例如,获取模块410、确定模块420和标记模块430中的任意多个可以合并在一个模块中实现,或者其中的任意一个模块可以被拆分成多个模块。或者,这些模块中的一个或多个模块的至少部分功能可以与其他模块的至少部分功能相结合,并在一个模块中实现。根据本公开的实施例,获取模块410、确定模块420和标记模块430中的至少一个可以至少被部分地实现为硬件电路,例如现场可编程门阵列(FPGA)、可编程逻辑阵列(PLA)、片上系统、基板上的系统、封装上的系统、专用集成电路(ASIC),或可以通过对电路进行集成或封装的任何其他的合理方式等硬件或固件来实现,或以软件、硬件以及固件三种实现方式中任意一种或以其中任意几种的适当组合来实现。或者,获取模块410、确定模块420和标记模块430中 的至少一个可以至少被部分地实现为计算机程序模块,当该计算机程序模块被运行时,可以执行相应的功能。
FIG. 13 schematically shows a block diagram of a computer system suitable for implementing the image processing method and device according to an embodiment of the present disclosure. The computer system shown in FIG. 13 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present disclosure.
As shown in FIG. 13, the computer system 500 includes a processor 510 and a computer-readable storage medium 520. The computer system 500 can perform the method according to the embodiments of the present disclosure.
Specifically, the processor 510 may include, for example, a general-purpose microprocessor, an instruction set processor and/or a related chipset and/or a special-purpose microprocessor (for example, an application specific integrated circuit (ASIC)), and so on. The processor 510 may also include onboard memory for caching purposes. The processor 510 may be a single processing unit or multiple processing units for performing the different actions of the method flow according to the embodiments of the present disclosure.
The computer-readable storage medium 520 may be, for example, a non-volatile computer-readable storage medium; specific examples include, but are not limited to, magnetic storage devices such as magnetic tape or hard disk drives (HDD), optical storage devices such as compact discs (CD-ROM), and memories such as random access memory (RAM) or flash memory, and so on.
The computer-readable storage medium 520 may include a computer program 521, which may include code/computer-executable instructions that, when executed by the processor 510, cause the processor 510 to perform the method according to the embodiments of the present disclosure or any variation thereof.
The computer program 521 may be configured to have, for example, computer program code including computer program modules. For example, in an example embodiment, the code in the computer program 521 may include one or more program modules, for example module 521A, module 521B, and so on. It should be noted that the manner and number of divisions of the modules are not fixed, and a person skilled in the art may use suitable program modules or combinations of program modules according to the actual situation; when these combinations of program modules are executed by the processor 510, the processor 510 can perform the method according to the embodiments of the present disclosure or any variation thereof.
According to an embodiment of the present disclosure, at least one of the acquisition module 410, the determination module 420 and the marking module 430 may be implemented as a computer program module described with reference to FIG. 13 which, when executed by the processor 510, can implement the corresponding operations described above.
The present disclosure also provides a computer-readable storage medium, which may be included in the device/apparatus/system described in the above embodiments, or may exist separately without being assembled into the device/apparatus/system. The above computer-readable storage medium carries one or more programs which, when executed, implement the method according to the embodiments of the present disclosure.
According to an embodiment of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example, but is not limited to: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program which can be used by or in combination with an instruction execution system, apparatus or device.
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions and operations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment or a part of code, and the module, program segment or part of code contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams or flowcharts, and combinations of blocks in the block diagrams or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
Those skilled in the art will understand that the features described in the various embodiments and/or claims of the present disclosure may be combined and/or integrated in various ways, even if such combinations or integrations are not explicitly described in the present disclosure. In particular, the features described in the various embodiments and/or claims of the present disclosure may be combined and/or integrated in various ways without departing from the spirit and teachings of the present disclosure. All such combinations and/or integrations fall within the scope of the present disclosure.
Although the present disclosure has been shown and described with reference to specific exemplary embodiments thereof, those skilled in the art will understand that various changes in form and detail may be made to the present disclosure without departing from the spirit and scope of the present disclosure as defined by the appended claims and their equivalents. Therefore, the scope of the present disclosure should not be limited to the above embodiments, but should be determined not only by the appended claims but also by their equivalents.

Claims (14)

  1. An image processing method, comprising:
    acquiring a scanned image marked with a suspect, wherein the scanned image is obtained by scanning an inspected object with a security inspection device;
    determining, according to the scanning start time of the scanned image, a visible light image corresponding to the scanned image, wherein the visible light image is obtained by capturing the inspected object with a visible light image acquisition device; and
    marking the suspect in the visible light image according to the marked position of the suspect in the scanned image.
  2. The method according to claim 1, further comprising:
    displaying the visible light image marked with the suspect on an electronic device at the unpacking inspection station, so as to indicate the position of the suspect.
  3. The method according to claim 1, wherein determining, according to the scanning start time of the scanned image, the visible light image corresponding to the scanned image comprises:
    acquiring the transfer duration of the inspected object in the security inspection device;
    determining the acquisition time of the visible light image according to the transfer duration of the inspected object in the security inspection device and the scanning start time of the scanned image; and
    determining, according to the acquisition time of the visible light image, the visible light image corresponding to the scanned image from the images captured by the visible light image acquisition device.
  4. The method according to claim 1, wherein marking the suspect in the visible light image according to the marked position of the suspect in the scanned image comprises:
    acquiring the pixel mapping relationship between the scanned image and the visible light image;
    determining the marked position of the suspect in the visible light image according to the marked position of the suspect in the scanned image, the size information of the scanned image, and the pixel mapping relationship; and
    marking the suspect in the visible light image according to the marked position of the suspect in the visible light image.
  5. The method according to claim 4, wherein the marked position of the suspect in the visible light image is determined from the marked position of the suspect in the scanned image, the size information of the scanned image, and the pixel mapping relationship according to the following formulas:
    x2 = (x1/L1)*L3*K, y2 = (y1/W1)*(W3-c-d)+d,
    L4 = (L2/L1)*L3*K, W4 = (W2/W1)*(W3-c-d);
    wherein the coordinates of the suspect frame of the suspect in the scanned image are (x1, y1), L1 is the length of the scanned image, W1 is the width of the scanned image, L2 is the length of the suspect frame of the suspect in the scanned image, and W2 is the width of the suspect frame of the suspect in the scanned image; the coordinates of the suspect frame of the suspect in the visible light image are (x2, y2), L3 is the length of the visible light image, W3 is the width of the visible light image, L4 is the length of the suspect frame of the suspect in the visible light image, and W4 is the width of the suspect frame of the suspect in the visible light image; K is the pixel mapping relationship, c denotes the spacing between the upper edge of the belt in the visible light image and the upper edge of the visible light image, and d denotes the spacing between the lower edge of the belt in the visible light image and the lower edge of the visible light image.
  6. The method according to claim 1, wherein acquiring the scanned image marked with a suspect comprises:
    acquiring the scanned image marked with a suspect from an image judging station; and/or
    acquiring the scanned image marked with a suspect from an automatic image judging server.
  7. An image processing device, comprising:
    an acquisition module configured to acquire a scanned image marked with a suspect, wherein the scanned image is obtained by scanning an inspected object with a security inspection device;
    a determination module configured to determine, according to the scanning start time of the scanned image, a visible light image corresponding to the scanned image, wherein the visible light image is obtained by capturing the inspected object with a visible light image acquisition device; and
    a marking module configured to mark the suspect in the visible light image according to the marked position of the suspect in the scanned image.
  8. The device according to claim 7, further comprising:
    a display module configured to display the visible light image marked with the suspect on an electronic device at the unpacking inspection station, so as to indicate the position of the suspect.
  9. The device according to claim 7, wherein the determination module comprises:
    a first acquisition unit configured to acquire the transfer duration of the inspected object in the security inspection device;
    a first determination unit configured to determine the acquisition time of the visible light image according to the transfer duration of the inspected object in the security inspection device and the scanning start time of the scanned image; and
    a second determination unit configured to determine, according to the acquisition time of the visible light image, the visible light image corresponding to the scanned image from the images captured by the visible light image acquisition device.
  10. The device according to claim 7, wherein the marking module comprises:
    a second acquisition unit configured to acquire the pixel mapping relationship between the scanned image and the visible light image;
    a third determination unit configured to determine the marked position of the suspect in the visible light image according to the marked position of the suspect in the scanned image, the size information of the scanned image, and the pixel mapping relationship; and
    a marking unit configured to mark the suspect in the visible light image according to the marked position of the suspect in the visible light image.
  11. The device according to claim 10, wherein the marked position of the suspect in the visible light image is determined from the marked position of the suspect in the scanned image, the size information of the scanned image, and the pixel mapping relationship according to the following formulas:
    x2 = (x1/L1)*L3*K, y2 = (y1/W1)*(W3-c-d)+d,
    L4 = (L2/L1)*L3*K, W4 = (W2/W1)*(W3-c-d);
    wherein the coordinates of the suspect frame of the suspect in the scanned image are (x1, y1), L1 is the length of the scanned image, W1 is the width of the scanned image, L2 is the length of the suspect frame of the suspect in the scanned image, and W2 is the width of the suspect frame of the suspect in the scanned image; the coordinates of the suspect frame of the suspect in the visible light image are (x2, y2), L3 is the length of the visible light image, W3 is the width of the visible light image, L4 is the length of the suspect frame of the suspect in the visible light image, and W4 is the width of the suspect frame of the suspect in the visible light image; K is the pixel mapping relationship, c denotes the spacing between the upper edge of the belt in the visible light image and the upper edge of the visible light image, and d denotes the spacing between the lower edge of the belt in the visible light image and the lower edge of the visible light image.
  12. The device according to claim 7, wherein the acquisition module is configured to:
    acquire the scanned image marked with a suspect from an image judging station; and/or
    acquire the scanned image marked with a suspect from an automatic image judging server.
  13. A computer system, comprising:
    one or more processors; and
    a computer-readable storage medium configured to store one or more programs,
    wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method according to any one of claims 1 to 6.
  14. A computer-readable storage medium having executable instructions stored thereon which, when executed by a processor, cause the processor to implement the method according to any one of claims 1 to 6.
PCT/CN2020/089633 2019-08-19 2020-05-11 Image processing method, device, computer system and readable storage medium WO2021031626A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910767309.6 2019-08-19
CN201910767309.6A CN112396649B (zh) Image processing method, device, computer system and readable storage medium

Publications (1)

Publication Number Publication Date
WO2021031626A1 true WO2021031626A1 (zh) 2021-02-25

Family

ID=74603644

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/089633 WO2021031626A1 (zh) Image processing method, device, computer system and readable storage medium

Country Status (2)

Country Link
CN (1) CN112396649B (zh)
WO (1) WO2021031626A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113393429B (zh) * 2021-06-07 2023-03-24 杭州睿影科技有限公司 Calibration method for the exit position of a target detection device, and target detection device
CN117590479A (zh) * 2022-08-08 2024-02-23 同方威视技术股份有限公司 Suspect article positioning system and positioning method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104849770A (zh) * 2015-06-02 2015-08-19 北京航天易联科技发展有限公司 Imaging method based on a passive terahertz security inspection imaging system
CN108347435A (zh) * 2017-12-25 2018-07-31 王方松 Security detection device for public places and data acquisition method thereof
CN108846823A (zh) * 2018-06-22 2018-11-20 西安天和防务技术股份有限公司 Method for fusing a terahertz image and a visible light image
CN110031909A (zh) * 2019-04-18 2019-07-19 西安天和防务技术股份有限公司 Security inspection system and security inspection method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109959969B (zh) * 2017-12-26 2021-03-12 同方威视技术股份有限公司 Auxiliary security inspection method, device and system

Also Published As

Publication number Publication date
CN112396649A (zh) 2021-02-23
CN112396649B (zh) 2024-05-28

Similar Documents

Publication Publication Date Title
US20190095877A1 (en) Image recognition system for rental vehicle damage detection and management
WO2021031626A1 (zh) Image processing method, device, computer system and readable storage medium
US8600116B2 (en) Video speed detection system
CN104091168B (zh) 基于无人机影像的电力线自动提取定位方法
WO2016132587A1 (ja) 情報処理装置、道路構造物管理システム、及び道路構造物管理方法
CN112949577B (zh) 信息关联方法、装置、服务器及存储介质
CN111612020B (zh) 一种异常被检物的定位方法以及安检分析设备、系统
CN108020825A (zh) 激光雷达、激光摄像头、视频摄像头的融合标定系统及方法
JP2017520063A5 (zh)
US10817747B2 (en) Homography through satellite image matching
WO2020258901A1 (zh) 传感器数据处理方法、装置、电子设备及系统
CN114295649B (zh) 一种信息关联方法、装置、电子设备及存储介质
IL236778A (en) Calibrate camera-based surveillance systems
CN107730880A (zh) 一种基于无人飞行器的拥堵监测方法和无人飞行器
US20130135446A1 (en) Street view creating system and method thereof
CN110210338A (zh) 一种对目标人员的着装信息进行检测识别的方法及系统
JP2009140402A (ja) 情報表示装置、情報表示方法、情報表示プログラム及び情報表示プログラムを記録した記録媒体
CN112487894A (zh) 基于人工智能的轨道交通保护区自动巡查方法及装置
CN114898044A (zh) 检测对象成像方法、装置、设备及介质
CN106320173B (zh) 车载无人机桥梁日常安全检测系统及检测方法
Balali et al. Image-based retro-reflectivity measurement of traffic signs in day time
CN115049322B (zh) 一种集装箱堆场的集装箱管理方法及系统
CN105869413A (zh) 基于摄像头视频检测车流量和车速的方法
US20220036107A1 (en) Calculation device, information processing method, and storage medium
CN116311085B (zh) 一种图像处理方法、系统、装置及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20855030; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20855030; Country of ref document: EP; Kind code of ref document: A1)