CN110648300A - Image data synthesis method, image data synthesis device, computer equipment and storage medium - Google Patents


Info

Publication number
CN110648300A
CN110648300A
Authority
CN
China
Prior art keywords
image, fused, article, images, fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910848437.3A
Other languages
Chinese (zh)
Inventor
黄鼎隆
马修·罗伯特·斯科特
杜竹君
黄丹
胡晓军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Yuepu Investment Center LP
Original Assignee
Shenzhen Malong Artificial Intelligence Research Center
Shenzhen Malong Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Malong Artificial Intelligence Research Center, Shenzhen Malong Technologies Co Ltd filed Critical Shenzhen Malong Artificial Intelligence Research Center
Priority to CN201910848437.3A
Publication of CN110648300A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/90: Determination of colour characteristics
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10024: Color image
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to an image data synthesis method, an image data synthesis device, a computer device and a storage medium. The method comprises the following steps: acquiring an article image of a category to be fused; randomly augmenting the article image to obtain an augmented article image; acquiring a source image, wherein the source image comprises a parcel image; segmenting the source image according to the parcel image and determining a parcel position in the source image; and performing fusion processing using the parcel position and the augmented article image to generate a fused image. With this method, a large number of positive sample images for training an AI interpretation model can be obtained.

Description

Image data synthesis method, image data synthesis device, computer equipment and storage medium
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to an image data synthesis method, apparatus, computer device, and storage medium.
Background
At present, security inspection work is highly dependent on manual labor: a security inspector observes the images on a security inspection machine to judge whether dangerous articles such as knives and guns are present. When passenger flow is heavy, visual fatigue sets in easily and the accuracy of identifying dangerous goods drops. With the development of artificial intelligence technology, AI (Artificial Intelligence) intelligent interpretation is increasingly favored by security inspection units. An AI interpretation model must be trained on a massive number of sample images, each containing the article to be detected. In actual security inspection, however, few such sample images can be extracted. For example, a subway security checkpoint may generate one million security images in a day, of which only a handful contain knives.
If the AI interpretation model is trained on only a few sample images, its accuracy in identifying dangerous goods will be low, and the goal of automatically detecting dangerous goods cannot be achieved. Therefore, how to acquire a large number of sample images quickly and effectively has become a pressing technical problem.
Disclosure of Invention
In view of the above technical problems, it is necessary to provide an image data synthesis method, apparatus, computer device, and storage medium capable of improving the efficiency of sample image acquisition.
An image data synthesis method comprising:
acquiring an article image of a category to be fused;
randomly augmenting the article image to obtain an augmented article image;
acquiring a source image, wherein the source image comprises a parcel image;
segmenting the source image according to the parcel image, and determining a parcel position in the source image;
and performing fusion processing using the parcel position and the augmented article image to generate a fused image.
In one embodiment, the fusing the augmented article image using the parcel location to generate a fused image includes:
determining a fusion position corresponding to the augmented article image by using the parcel position;
and performing fusion processing according to the fusion position and the augmented article image to generate a fused image.
In one embodiment, the determining the fusion position corresponding to the augmented item image by using the parcel position includes:
acquiring a fusion range in the package position, wherein the fusion range is obtained by counting the labeling information of the source image;
determining the size of the augmented article image according to the category to be fused;
and selecting a range corresponding to the size of the augmented article image in the fusion range, and recording the selected range as a fusion position.
In one embodiment, the method further comprises:
acquiring marking information corresponding to a source image;
extracting multiple types of object images to be fused from the labeling information;
acquiring the number of the images of the article to be fused;
and when the number of the extracted article images meets the required number, splicing the article images.
In one embodiment, the method further comprises:
acquiring the splicing number corresponding to the fused image;
determining a coincidence region between the fused images;
and splicing the corresponding fused images according to the overlapped areas and the splicing number.
In one embodiment, the determining the overlapping region between the fused images includes:
acquiring a splicing coincidence range, wherein the splicing coincidence range is obtained by counting according to the labeling information of a plurality of source images;
and randomly selecting a superposition area between the multiple fused images in the splicing superposition range.
An image data synthesis apparatus, the apparatus comprising:
the acquisition module is used for acquiring an article image of a category to be fused;
the augmentation module is used for randomly augmenting the article image to obtain an augmented article image;
the acquisition module is also used for acquiring a source image, wherein the source image comprises a package image;
the segmentation module is used for segmenting the source image according to the parcel image and determining the parcel position in the source image;
and the fusion module is used for performing fusion processing on the parcel position and the augmented article image to generate a fused image.
In one embodiment, the fusion module is further configured to determine a fusion location corresponding to the augmented item image using the parcel location; and performing fusion processing according to the fusion position and the augmented article image to generate a fused image.
A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor executing the computer program to perform the steps of:
acquiring an article image of a category to be fused;
randomly augmenting the article image to obtain an augmented article image;
acquiring a source image, wherein the source image comprises a parcel image;
segmenting the source image according to the parcel image, and determining a parcel position in the source image;
and performing fusion processing using the parcel position and the augmented article image to generate a fused image.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring an article image of a category to be fused;
randomly augmenting the article image to obtain an augmented article image;
acquiring a source image, wherein the source image comprises a parcel image;
segmenting the source image according to the parcel image, and determining a parcel position in the source image;
and performing fusion processing using the parcel position and the augmented article image to generate a fused image.
According to the image data synthesis method, the image data synthesis device, the computer equipment and the storage medium, article images of the categories to be fused are obtained and randomly augmented to yield a plurality of augmented article images. If there are many article images to be fused, a large number of randomly augmented article images can be obtained. The parcel position in the source image can be determined by segmenting the source image; the parcel position, the augmented article images and the source image then undergo fusion processing, so that a large number of randomly augmented article images can each be fused with source images to produce a large number of sample images. In this way, the massive sample images required for training the AI interpretation model can be acquired quickly and effectively.
Drawings
FIG. 1 is a diagram illustrating an exemplary embodiment of an application of a method for synthesizing image data;
FIG. 2 is a flow diagram of a method for image data synthesis in one embodiment;
FIG. 3 is a diagram illustrating image fusion of an augmented object according to one embodiment;
FIG. 4 is a schematic structural diagram of an image data synthesizing apparatus according to an embodiment;
FIG. 5 is a schematic diagram of an image data synthesizing apparatus according to another embodiment;
FIG. 6 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The image data synthesis method provided in the embodiment of the present invention can be applied to an application environment as shown in fig. 1. The first terminal 104 is connected to the security check machine 102 via a network. The first terminal 104 and the server 106 are connected via a network. The second terminal 108 is connected to the server 106 via a network. The security check machine 102 scans the baggage item to generate a color map and transmits the color map to the first terminal 104. The first terminal 104 uploads the color map to the server 106. The server 106 acquires the color map, the second terminal 108 acquires the color map in the server, and the source image is obtained by labeling the object images of various categories in the color map. The source image contains marking information, and the marking information comprises an article type, an article name and an article position (comprising coordinates). The second terminal 108 acquires the article image of the category to be fused, and randomly amplifies the article image to obtain an amplified article image. The second terminal 108 segments the source image according to the parcel image in the source image, determines the parcel position in the source image, and performs fusion processing on the parcel position, the augmented article image and the source image to generate a fused image.
In one embodiment, as shown in fig. 2, an image data synthesis method is provided, which is described by taking the second terminal in fig. 1 (for simplicity of description, the second terminal is hereinafter simply referred to as the terminal) as an example, and specifically includes:
step 202, acquiring an article image of a category to be fused.
The items may be classified into different categories according to security inspection needs. Different security inspection scenes involve different categories of items to be detected; the item category to be detected is referred to in this embodiment as the category to be fused. The categories to be fused may be recorded in an item category list, which may include, for example, a cutter category, a container category, a tool category, and the like.
The terminal obtains the article category list corresponding to the security inspection scene and extracts the categories to be fused from the list. A category to be fused may correspond to one article image, or to two or more (collectively, a plurality of images). If there are multiple article images for a category, the shape, size and other attributes of the article may differ between them.
The terminal acquires source images according to the category to be fused. The article image of a category to be fused may be generated in various ways. For example, the terminal may segment a source image according to its annotation information and extract the corresponding article image of the category to be fused; alternatively, the terminal may extract the article image to be fused directly from a web page. According to their annotation information, source images can be further classified into those containing articles of the category to be fused (also referred to as first source images) and those not containing such articles (also referred to as second source images). In a source image containing an article of the category to be fused, the annotation information gives the contour of the article image, along which the source image is segmented to extract the article image of the category to be fused.
And 204, randomly amplifying the article image to obtain an amplified article image.
When one article image corresponds to the category to be fused, the terminal randomly augments that image by affine transformation, that is, by applying linear transformation and translation. By varying the transformation parameters, one article image can be expanded during augmentation into a plurality of article images of different shapes. When multiple article images correspond to the category to be fused, the terminal randomly augments each of them, so that a large number of augmented article images can be obtained.
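The random augmentation described above, an affine transform (linear transformation plus translation) with randomized parameters, can be sketched as follows; the rotation, scale and shift ranges are illustrative assumptions, since the embodiment does not specify them:

```python
import math
import random

def random_affine_params(rng, max_rotate_deg=30.0, scale_range=(0.8, 1.2),
                         max_shift=10.0):
    """Draw random affine parameters (rotation, isotropic scale, translation).

    The parameter ranges here are assumptions; the embodiment only states
    that the article image undergoes linear transformation and translation
    with varying parameters.
    """
    theta = math.radians(rng.uniform(-max_rotate_deg, max_rotate_deg))
    s = rng.uniform(*scale_range)
    tx = rng.uniform(-max_shift, max_shift)
    ty = rng.uniform(-max_shift, max_shift)
    # 2x3 affine matrix laid out as (a, b, tx, c, d, ty)
    return (s * math.cos(theta), -s * math.sin(theta), tx,
            s * math.sin(theta), s * math.cos(theta), ty)

def apply_affine(points, m):
    """Map (x, y) points through the 2x3 affine matrix."""
    a, b, tx, c, d, ty = m
    return [(a * x + b * y + tx, c * x + d * y + ty) for x, y in points]

rng = random.Random(0)
corners = [(0, 0), (64, 0), (64, 64), (0, 64)]  # article-image bounding box
# One source article image expanded into five differently shaped variants
augmented = [apply_affine(corners, random_affine_params(rng)) for _ in range(5)]
```

In practice the same matrix would be applied to every pixel of the article image (e.g. via an image library's warp routine) rather than only to its corners.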
And step 206, acquiring a source image, wherein the source image comprises a package image.
The terminal acquires a source image that does not contain the article to be fused, that is, a second source image. A source image may include a plurality of items, comprising the items to be detected and other items, each with corresponding annotation information (category, name, position and so on; the position may be represented by coordinates). The other items may include parcels, and a parcel may contain one or more items. It should be noted that, to perform effective image data fusion, that is, to simulate a dangerous article being carried inside a parcel in a real security inspection scene, this embodiment adopts the second source image as the base image to be fused. The image of the article to be detected is fused into the range corresponding to the parcel position, and the parcel in the second source image may contain some items or no items at all.
And step 208, segmenting the source image according to the wrapping image, and determining the wrapping position in the source image.
The terminal obtains a source image that does not contain the article to be fused; image data of the images of the various articles are annotated in the source image, comprising the article images and their corresponding annotation data. The source image includes a parcel image corresponding to a piece of luggage. Statistics are performed on the image data corresponding to the parcel image and on the annotation information of the source image to determine the position of the parcel image in the source image, and the source image is segmented according to that position.
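One simple way to turn the parcel annotation into a parcel position, as described above, is to take the bounding box of the annotated contour; the contour points below are hypothetical:

```python
def parcel_bbox(contour_points):
    """Bounding box (x, y, width, height) of a parcel computed from its
    annotated contour points.

    This is one plausible reading of 'determining the parcel position from
    the annotation information'; the point values below are hypothetical.
    """
    xs = [x for x, _ in contour_points]
    ys = [y for _, y in contour_points]
    return (min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))

bbox = parcel_bbox([(120, 80), (340, 95), (330, 260), (110, 250)])
# -> (110, 80, 230, 180)
```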
And step 210, fusing the positions of the packages and the augmented object images to generate fused images.
The terminal acquires a second source image stored on the server, in which the image data of the article images of the various categories are annotated, and determines the parcel position in the source image from statistics on the image data corresponding to the parcel image and on the annotation information of the source image. The terminal then acquires the article image augmented in step 204 and fuses the range of the parcel position in the source image with the augmented article image, thereby generating a fused image.
In this embodiment, article images of the category to be fused are obtained and randomly augmented to produce a plurality of augmented article images. If there are many article images to be fused, a large number of randomly augmented article images can be obtained. By segmenting the source image, the parcel position in the source image can be determined; the parcel position, the augmented article images and the source image are fused, so that a large number of randomly augmented article images can each be fused with source images to obtain a large number of sample images. In this way, the massive sample images required for training the AI interpretation model can be acquired quickly and effectively.
In one embodiment, the fusing processing is performed by using the parcel position and the augmented article image to generate a fused image, and the fusing processing includes: determining a fusion position corresponding to the augmented article image by using the parcel position; and performing fusion processing according to the fusion position and the augmented article image to generate a fused image.
In this embodiment, statistics is performed according to image data corresponding to the parcel image and the annotation information of the source image, and the position of the parcel in the source image is determined. And counting the image data of the augmented article image, and calculating to obtain the size of the augmented article image. And determining the fusion position corresponding to the augmented article image by using the package position according to the size of the augmented article image. And fusing the augmented object image with the fused position by using a fusion algorithm to generate a fused image.
Further, the fusion processing may take various forms. The augmented article image and/or the article images of each category to be fused in the category list, together with the second source image, may be referred to as the images to be fused. For brevity, the augmented article image and/or the article images of each category to be fused are called the first image to be fused, and the second source image is called the second image to be fused. The terminal obtains the color image (also called the RGB image) corresponding to the first image to be fused and the color image corresponding to the second source image, and converts each into a corresponding high/low-energy image through a first deep neural network model. The high/low-energy images represent penetration information of the target under X-ray particle beams of different energy levels. If the penetration rate of object a is x1 and that of object b is x2, the penetration rate when a and b are stacked together can be approximated as x1 × x2, and the superimposed color image can then be estimated from the value of x1 × x2 according to a general coloring standard. The terminal acquires the first high/low-energy data corresponding to the first image to be fused and the second high/low-energy data corresponding to the second image to be fused, and multiplies them in the high/low-energy channels to obtain the high/low-energy data corresponding to the fused image. The terminal then converts this high/low-energy image back into a corresponding color image through a second deep neural network model.
This completes the effective fusion of the first image to be fused with the second source image, yielding a positive sample image consistent with a real security inspection scene.
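The multiplicative penetration model above (stacked objects with penetration rates x1 and x2 behave approximately like x1 × x2) can be sketched per pixel. The deep-neural-network conversions between color and high/low-energy images are omitted, and plain nested lists stand in for the image channels:

```python
def fuse_penetration(a, b):
    """Per-pixel product of two penetration-rate maps (values in [0, 1]).

    Stacking object A (rate x1) on object B (rate x2) is approximated by
    x1 * x2, as in the high/low-energy fusion step described above.
    The 2x2 maps below are hypothetical toy data.
    """
    assert len(a) == len(b) and len(a[0]) == len(b[0]), "maps must match in size"
    return [[pa * pb for pa, pb in zip(ra, rb)] for ra, rb in zip(a, b)]

item = [[0.9, 0.5],
        [0.7, 1.0]]   # penetration map of the article to be fused
bag  = [[0.8, 0.8],
        [0.8, 0.8]]   # penetration map of the parcel region
fused = fuse_penetration(item, bag)  # e.g. 0.9 * 0.8 gives approximately 0.72
```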
The terminal may also convert the first image to be fused from the RGB format to the HSV color space to obtain first H, S and V channel components, and likewise convert the second image to be fused to obtain second H, S and V channel components. The terminal superposes the first and second H channel components, the first and second S channel components, and the first and second V channel components, assembles the superposed components into an HSV color space image, and converts the HSV image back into an RGB image. This likewise completes the effective fusion of the first image to be fused with the second source image, yielding a positive sample image consistent with a real security inspection scene.
The terminal may also obtain the first R, G and B channel components corresponding to the first image to be fused and the second R, G and B channel components corresponding to the second image to be fused in the RGB channels. It superposes the corresponding components and assembles the superposed R, G and B components into an RGB image. This likewise completes the effective fusion of the first image to be fused with the second source image, yielding a positive sample image consistent with a real security inspection scene.
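Both channel-superposition variants (HSV and RGB) amount to adding corresponding components and keeping the result in range. A minimal sketch follows; clipping the sum to the 8-bit maximum is an assumed convention, since the embodiment does not say how out-of-range sums are handled:

```python
def superpose_channels(img1, img2):
    """Add two images channel-wise, clipping each sum to [0, 255].

    Works for any per-pixel channel tuples, whether (R, G, B) or (H, S, V),
    stored as 8-bit values. Clipping the overflow is an assumption, not a
    rule stated in the embodiment.
    """
    return [[tuple(min(c1 + c2, 255) for c1, c2 in zip(p1, p2))
             for p1, p2 in zip(row1, row2)]
            for row1, row2 in zip(img1, img2)]

a = [[(100, 200, 30)]]   # hypothetical 1x1 image from the first image to be fused
b = [[(100, 100, 10)]]   # hypothetical 1x1 image from the second image to be fused
out = superpose_channels(a, b)  # -> [[(200, 255, 40)]]
```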
These fusion methods closely simulate the superposition of article images, which improves the accuracy with which the trained AI interpretation model recognizes dangerous articles.
In one embodiment, determining a fusion position corresponding to the augmented item image using the parcel position comprises: acquiring a fusion range in the package position, wherein the fusion range is obtained by counting the label information of the source image; determining the size of the augmented article image according to the article image of the category to be fused; and selecting a range corresponding to the size of the augmented object image within the fusion range, and recording the selected range as a fusion position.
As shown in fig. 3, in the source image 302, the parcel image 304 needs to be fused with an augmented article image 306, and a fusion position is selected at random within the range of the parcel position. The source image contains the images of various articles and the image data corresponding to them; statistics on the image data of the parcel image and on the annotation information of the source image yield the position range of the parcel image in the source image, which is selected as the fusion range, within which a fusion position is chosen at random. Statistics on the image data of the article image to be fused yield its size, which determines the size of the augmented article image. A range corresponding to the size of the augmented article image is selected at random within the fusion range and recorded as the fusion position. When only one augmented article image needs to be fused within the parcel range, its position within the fusion range may be, for example, 306(a), 306(b), 306(c) or 306(d); many other fusion positions are possible as long as the fusion range is not exceeded.
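Randomly selecting a fusion position inside the fusion range, constrained so that the augmented article image does not exceed it, can be sketched as follows; the (x, y, width, height) rectangle convention and the concrete numbers are assumptions:

```python
import random

def pick_fusion_position(parcel, item_size, rng):
    """Pick a random top-left corner for the augmented article image so that
    it lies entirely within the parcel's fusion range.

    parcel is an assumed (x, y, w, h) rectangle for the fusion range;
    item_size is the (w, h) of the augmented article image.
    """
    px, py, pw, ph = parcel
    iw, ih = item_size
    if iw > pw or ih > ph:
        raise ValueError("augmented article image exceeds the fusion range")
    x = rng.randint(px, px + pw - iw)
    y = rng.randint(py, py + ph - ih)
    return (x, y, iw, ih)

rng = random.Random(42)
pos = pick_fusion_position(parcel=(50, 40, 300, 200), item_size=(64, 48), rng=rng)
# pos lies fully inside the parcel rectangle
```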
Furthermore, a plurality of augmented article images can be fused within the fusion range, and they may be article images of the same category or of different categories, such as cutters and tools. In this way, a large number of diverse sample images can be fused.
In one embodiment, the method further comprises: obtaining the annotation information corresponding to the source image; extracting article images of multiple categories to be fused according to the annotation information; acquiring the required number of article images to be fused; and when the number of selected article images meets the required number, splicing the article images.
In this embodiment, the terminal acquires a source image containing annotation information in which the image data of the article images of each category are labeled. The contour of each article image is determined from its image data, and the article image is segmented along that contour. Article images of multiple categories to be fused are selected from the article category list, the required number of article images to be fused is acquired, and when the number of selected article images meets that number, the articles to be fused are spliced.
Furthermore, before the number of article images to be fused is acquired, it must be determined, and various strategies are available for doing so. The terminal may obtain the annotation information from the source images, in which the image data of the article images of each category are labeled; by collecting statistics on these image data, a statistical range for the number of article images is obtained, and the number of articles to be fused is then selected at random within that range for each article image. Alternatively, the terminal may fix the number of article images to be fused. After the number has been determined, once the selected article images of the multiple categories to be fused meet that number, the article images to be fused are spliced. In this way, sample images containing images of articles of different categories can be obtained.
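The strategy of drawing a random per-category count within a statistically derived range can be sketched as follows; the range and the category names are hypothetical:

```python
import random

def choose_item_counts(count_range, categories, rng):
    """Draw, for each category to be fused, how many article images to use.

    count_range stands in for the statistical range obtained from the
    annotation information; both it and the category names below are
    hypothetical examples.
    """
    lo, hi = count_range
    return {category: rng.randint(lo, hi) for category in categories}

rng = random.Random(7)
counts = choose_item_counts((1, 3), ["cutter", "container", "tool"], rng)
```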
In one embodiment, the method further comprises obtaining the number of splices corresponding to the fused image; determining a coincidence region between the fused images; and splicing the corresponding fused images according to the overlapped areas and the splicing number.
In this embodiment, before the fused images are spliced, the overlap region between them needs to be determined. The terminal acquires a source image containing annotation information in which the image data of the article images of each category are labeled; the image data corresponding to the parcel images and the annotation information of the source image are selected for statistics, which yields the overlap areas between the parcel images. The overlap region between the fused images is determined from the overlap areas of the parcel images in the source image. The splicing number corresponding to the fused images is acquired, and the corresponding fused images are spliced according to the overlap regions and the splicing number. The splicing includes horizontal splicing: in a real security inspection scene, after luggage is placed on the conveyor belt of the security inspection machine, the belt rolls continuously in one direction, and the scanned luggage images acquired by the machine likewise appear continuously in that direction. Horizontal splicing therefore simulates parcels sticking together as luggage passes through security inspection, and sample images spliced in this way realistically reproduce parcel adhesion in a real security inspection scene.
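Horizontal splicing with an overlap region, simulating parcels stuck together on the conveyor belt, can be sketched as follows; blending the shared columns by taking the per-pixel minimum penetration value is an assumption, as the embodiment does not specify the blending rule:

```python
def stitch_horizontal(left, right, overlap):
    """Stitch two same-height images side by side with `overlap` shared columns.

    Pixels are penetration rates in [0, 1]; in the shared columns the darker
    (lower) value wins, an assumed stand-in for the fusion rule. The 1x3
    images below are hypothetical toy data.
    """
    assert len(left) == len(right), "images must have the same height"
    assert 0 <= overlap <= min(len(left[0]), len(right[0]))
    out = []
    for lrow, rrow in zip(left, right):
        shared = [min(a, b) for a, b in zip(lrow[len(lrow) - overlap:],
                                            rrow[:overlap])]
        out.append(lrow[:len(lrow) - overlap] + shared + rrow[overlap:])
    return out

left  = [[1.0, 0.9, 0.5]]
right = [[0.7, 0.8, 1.0]]
stitched = stitch_horizontal(left, right, overlap=1)
# width 3 + 3 - 1 = 5; the shared column keeps min(0.5, 0.7) = 0.5
```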
In one embodiment, determining the overlap region between the fused images comprises: obtaining a splicing overlap range, wherein the splicing overlap range is obtained by statistics on the annotation information of a plurality of source images; and randomly selecting an overlap region between the plurality of fused images within the splicing overlap range.
In this embodiment, a plurality of source images are obtained; each source image includes annotation information labeling the image data of each category of article image. The image data corresponding to all package images is selected, and statistics over this image data and the annotation information of the source images yield the overlap regions of the packages, which are taken as the splicing overlap range between fused images. An overlap region between the plurality of fused images is randomly selected within this range, the splicing number corresponding to the fused images is obtained, and the fused images corresponding to the splicing number are spliced according to the overlap region. In this way, a large number of diverse spliced images can be obtained by splicing the fused images, producing sample images that closely simulate package adhesion.
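Deriving the splicing overlap range from annotations and sampling within it might look like the following sketch. The bounding-box format `(x0, y0, x1, y1)` and the left-to-right ordering of boxes are assumptions; the patent only states that the range comes from statistics over annotation information.

```python
import random

def overlap_range_from_annotations(source_annotations):
    """Collect the horizontal overlap, in pixels, between each pair of
    adjacent package bounding boxes across many annotated source
    images, and return the observed (min, max) as the splicing
    overlap range. Boxes are (x0, y0, x1, y1), sorted left to right."""
    overlaps = []
    for boxes in source_annotations:
        for (x0a, _, x1a, _), (x0b, _, x1b, _) in zip(boxes, boxes[1:]):
            overlaps.append(max(0, x1a - x0b))
    return min(overlaps), max(overlaps)

def pick_overlap(rng, overlap_range):
    """Randomly select an overlap within the splicing overlap range."""
    lo, hi = overlap_range
    return rng.randint(lo, hi)

# Hypothetical annotated boxes from two source images.
ann = [[(0, 0, 50, 40), (45, 0, 90, 40)],    # 5 px overlap
       [(0, 0, 60, 40), (48, 0, 110, 40)]]   # 12 px overlap
r = overlap_range_from_annotations(ann)
assert r == (5, 12)
ov = pick_overlap(random.Random(1), r)
assert 5 <= ov <= 12
```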
In one embodiment, as shown in fig. 4, there is provided an image data synthesizing apparatus including: an obtaining module 402, an augmenting module 404, a partitioning module 406, and a fusing module 408, wherein:
an obtaining module 402, configured to obtain an article image of a category to be fused;
an augmentation module 404, configured to randomly augment the article image to obtain an augmented article image;
the obtaining module 402 being further configured to obtain a source image, where the source image includes a package image;
a segmentation module 406, configured to segment the source image according to the package image and determine a package position in the source image; and
a fusion module 408, configured to perform fusion processing on the package position and the augmented article image to generate a fused image.
In one embodiment, the fusion module 408 is further configured to determine a fusion position corresponding to the augmented article image by using the package position, and to perform fusion processing according to the fusion position and the augmented article image to generate a fused image.
In this embodiment, the terminal obtains a source image in which the image data of each category of article image is annotated. Statistics over the image data corresponding to the package images and the annotation information of the source image yield the position of the package in the source image. The range of the package position is selected as the fusion range, and within it a position matching the size of the augmented article image is randomly selected as the fusion position. Fusion processing is then performed according to the fusion position and the augmented article image to generate a fused image. By randomly selecting the fusion position within the range of the package position, the augmented article image can be fused into different positions to generate a large number of different sample images.
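The fusion-position selection described above might be sketched as follows. The package position is assumed to be an axis-aligned bounding box, and the fusion step here is a plain overwrite paste; the actual blending rule is not specified in the text.

```python
import random
import numpy as np

def pick_fusion_position(package_box, item_shape, rng=None):
    """Randomly choose a top-left corner inside the package bounding
    box such that the augmented article image fits entirely within it.
    `package_box` is (x0, y0, x1, y1); `item_shape` is (h, w)."""
    rng = rng or random.Random()
    x0, y0, x1, y1 = package_box
    h, w = item_shape
    if x1 - x0 < w or y1 - y0 < h:
        raise ValueError("article image does not fit inside the package")
    x = rng.randint(x0, x1 - w)
    y = rng.randint(y0, y1 - h)
    return x, y

def fuse(source, item, pos):
    """Paste the article image at `pos` (simple overwrite; the real
    fusion rule is an assumption left open by the text)."""
    x, y = pos
    h, w = item.shape[:2]
    out = source.copy()
    out[y:y + h, x:x + w] = item
    return out

src = np.zeros((100, 100), np.uint8)
item = np.full((10, 20), 255, np.uint8)
pos = pick_fusion_position((30, 30, 80, 70), item.shape, random.Random(0))
fused = fuse(src, item, pos)
assert 30 <= pos[0] <= 60 and 30 <= pos[1] <= 60
assert fused[pos[1], pos[0]] == 255
```

Repeated calls with different random seeds yield the many distinct sample images the embodiment describes.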
In one embodiment, the fusion module 408 is further configured to obtain a fusion range within the package position, where the fusion range is obtained by statistics on the annotation information of the source image; determine the size of the augmented article image according to the article image of the category to be fused; and select a range corresponding to the size of the augmented article image within the fusion range, recording the selected range as the fusion position.
In one embodiment, the obtaining module 402 is further configured to obtain annotation information corresponding to the source image, so as to extract article images of multiple categories to be fused.
In one embodiment, the obtaining module 402 is further configured to obtain a stitching number corresponding to the fused image.
In yet another embodiment, the obtaining module 402 is further configured to obtain a splicing overlap range.
In one embodiment, as shown in fig. 5, the apparatus further comprises a splicing module 512, wherein:
the splicing module 512 is configured to splice the article images to be fused, or to splice the fused article images.
In one embodiment, a computer device is provided, whose internal structure may be as shown in FIG. 6. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements an image data synthesis method. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device of the computer device may be a touch layer covering the display screen, a key, a trackball, or a touchpad arranged on the housing of the computer device, or an external keyboard, touchpad, or mouse.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which when executed by a processor implements the steps in the image data synthesis method provided by the various embodiments described above.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination contains no contradiction, it should be considered within the scope of this specification.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. An image data synthesis method comprising:
acquiring an article image of a category to be fused;
randomly augmenting the article image to obtain an augmented article image;
acquiring a source image, wherein the source image comprises a package image;
segmenting the source image according to the package image, and determining a package position in the source image;
and performing fusion processing by using the package position, the augmented article image and the source image to generate a fused image.
2. The method of claim 1, wherein the performing fusion processing by using the package position, the augmented article image and the source image to generate a fused image comprises:
determining a fusion position corresponding to the augmented article image by using the package position;
and performing fusion processing according to the fusion position and the augmented article image to generate a fused image.
3. The method of claim 2, wherein the determining a fusion position corresponding to the augmented article image by using the package position comprises:
acquiring a fusion range within the package position, wherein the fusion range is obtained by statistics on the annotation information of the source image;
determining the size of the augmented article image according to the article image of the category to be fused;
and selecting a range corresponding to the size of the augmented article image in the fusion range, and recording the selected range as a fusion position.
4. The method of claim 1, further comprising:
acquiring marking information corresponding to a source image;
extracting multiple types of object images to be fused from the labeling information;
acquiring the number of the images of the article to be fused;
and when the number of the extracted article images to be fused meets the number of the article images to be fused, splicing the article images to be fused.
5. The method of claim 1, further comprising:
acquiring the splicing number corresponding to the fused image;
determining an overlap region between the fused images;
and splicing the corresponding fused images according to the overlap regions and the splicing number.
6. The method of claim 5, wherein the determining an overlap region between the fused images comprises:
acquiring a splicing overlap range, wherein the splicing overlap range is obtained by statistics on the annotation information of a plurality of source images;
and randomly selecting an overlap region between the plurality of fused images within the splicing overlap range.
7. An image data synthesis apparatus, the apparatus comprising:
the acquisition module is used for acquiring an article image of a category to be fused;
the augmentation module is used for randomly augmenting the article image to obtain an augmented article image;
the acquisition module is also used for acquiring a source image, wherein the source image comprises a package image;
the segmentation module is used for segmenting the source image according to the package image and determining the package position in the source image;
and the fusion module is used for performing fusion processing on the package position and the augmented article image to generate a fused image.
8. The apparatus of claim 7, wherein the fusion module is further configured to determine a fusion position corresponding to the augmented article image by using the package position, and to perform fusion processing according to the fusion position and the augmented article image to generate a fused image.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 6 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
CN201910848437.3A 2019-09-09 2019-09-09 Image data synthesis method, image data synthesis device, computer equipment and storage medium Pending CN110648300A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910848437.3A CN110648300A (en) 2019-09-09 2019-09-09 Image data synthesis method, image data synthesis device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910848437.3A CN110648300A (en) 2019-09-09 2019-09-09 Image data synthesis method, image data synthesis device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN110648300A true CN110648300A (en) 2020-01-03

Family

ID=69010257

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910848437.3A Pending CN110648300A (en) 2019-09-09 2019-09-09 Image data synthesis method, image data synthesis device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110648300A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111914723A (en) * 2020-07-27 2020-11-10 睿魔智能科技(深圳)有限公司 Data augmentation method and system for improving human body detection rate and human body detection model
CN112001873A (en) * 2020-08-27 2020-11-27 中广核贝谷科技有限公司 Data generation method based on container X-ray image

Citations (5)

Publication number Priority date Publication date Assignee Title
CN107766829A (en) * 2017-10-27 2018-03-06 浙江大华技术股份有限公司 A kind of method and apparatus of Articles detecting
CN108303748A (en) * 2017-01-12 2018-07-20 同方威视技术股份有限公司 The method for checking equipment and detecting the gun in luggage and articles
CN109948562A (en) * 2019-03-25 2019-06-28 浙江啄云智能科技有限公司 A kind of safe examination system deep learning sample generating method based on radioscopic image
CN109948565A (en) * 2019-03-26 2019-06-28 浙江啄云智能科技有限公司 A kind of not unpacking detection method of the contraband for postal industry
CN110210368A (en) * 2019-05-28 2019-09-06 东北大学 A kind of dangerous material image method for implanting based on safety check image

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN108303748A (en) * 2017-01-12 2018-07-20 同方威视技术股份有限公司 The method for checking equipment and detecting the gun in luggage and articles
CN107766829A (en) * 2017-10-27 2018-03-06 浙江大华技术股份有限公司 A kind of method and apparatus of Articles detecting
CN109948562A (en) * 2019-03-25 2019-06-28 浙江啄云智能科技有限公司 A kind of safe examination system deep learning sample generating method based on radioscopic image
CN109948565A (en) * 2019-03-26 2019-06-28 浙江啄云智能科技有限公司 A kind of not unpacking detection method of the contraband for postal industry
CN110210368A (en) * 2019-05-28 2019-09-06 东北大学 A kind of dangerous material image method for implanting based on safety check image

Non-Patent Citations (1)

Title
周丹 (Zhou Dan): "Security and Protection Technology" (《保安防范技术》), 31 July 2007 *

Cited By (3)

Publication number Priority date Publication date Assignee Title
CN111914723A (en) * 2020-07-27 2020-11-10 睿魔智能科技(深圳)有限公司 Data augmentation method and system for improving human body detection rate and human body detection model
CN112001873A (en) * 2020-08-27 2020-11-27 中广核贝谷科技有限公司 Data generation method based on container X-ray image
CN112001873B (en) * 2020-08-27 2024-05-24 中广核贝谷科技有限公司 Data generation method based on container X-ray image

Similar Documents

Publication Publication Date Title
WO2021047232A1 (en) Interaction behavior recognition method, apparatus, computer device, and storage medium
CN111667464B (en) Dangerous goods three-dimensional image detection method and device, computer equipment and storage medium
CN111626123B (en) Video data processing method, device, computer equipment and storage medium
CN112070079B (en) X-ray contraband package detection method and device based on feature map weighting
Yang et al. Learning deep feature correspondence for unsupervised anomaly detection and segmentation
US11182637B2 (en) X-ray image processing system and method, and program therefor
CN111382725B (en) Method, device, equipment and storage medium for processing illegal express packages
Pieringer et al. Flaw detection in aluminium die castings using simultaneous combination of multiple views
CN110648300A (en) Image data synthesis method, image data synthesis device, computer equipment and storage medium
CN113792623B (en) Security check CT target object identification method and device
CN112883926B (en) Identification method and device for form medical images
Straker et al. Instance segmentation of individual tree crowns with YOLOv5: A comparison of approaches using the ForInstance benchmark LiDAR dataset
CN114723724A (en) Dangerous goods identification method, device, equipment and storage medium based on artificial intelligence
Varadarajan et al. Weakly Supervised Object Localization on grocery shelves using simple FCN and Synthetic Dataset
CN114663711B (en) X-ray security inspection scene-oriented dangerous goods detection method and device
Suksangaram et al. The System Operates by Capturing Images of the Wall Surface and Applying Advanced Image Processing Algorithms to Analyze the Visual Data
Butters et al. Measuring apple size distribution from a near top–down image
Aziz et al. Instance segmentation of fire safety equipment using mask R-CNN
CN113312970A (en) Target object identification method, target object identification device, computer equipment and storage medium
CN113359738A (en) Mobile robot path planning method based on deep learning
CN112950568A (en) Scale length calculation method and device, computer equipment and storage medium
CN112256906A (en) Method, device and storage medium for marking annotation on display screen
Varshney et al. Detecting Object Defects for Quality Assurance in Manufacturing
Garcia-Rodriguez Advancements in Computer Vision and Image Processing
Singh Improving Threat Object Recognition for X-Ray Baggage Screening Using Distraction Removal Techniques

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220708

Address after: Room 368, 302, 211 Fute North Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai

Applicant after: Shanghai Yuepu Investment Center (L.P.)

Address before: 518081 floor 33, Yantian modern industry service center, 3018 Shayan Road, Shatoujiao street, Yantian District, Shenzhen, Guangdong Province

Applicant before: SHENZHEN MALONG TECHNOLOGY Co.,Ltd.

Applicant before: Shenzhen Malong artificial intelligence research center

RJ01 Rejection of invention patent application after publication

Application publication date: 20200103
