CN115115903A - Sample data generation method and device and storage medium - Google Patents

Sample data generation method and device and storage medium

Info

Publication number
CN115115903A
CN115115903A
Authority
CN
China
Prior art keywords
image
region
interest
target image
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210616969.6A
Other languages
Chinese (zh)
Inventor
熊熹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changsha Guangzhou Technology Co ltd
Original Assignee
Changsha Guangzhou Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changsha Guangzhou Technology Co ltd filed Critical Changsha Guangzhou Technology Co ltd
Priority to CN202210616969.6A priority Critical patent/CN115115903A/en
Publication of CN115115903A publication Critical patent/CN115115903A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/60 Rotation of whole images or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T5/94 Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Image Processing (AREA)

Abstract

The sample data generation method disclosed by this application comprises the following steps: S1, acquiring an original image, where the original image includes a region of interest and a first background region; S2, extracting the region of interest from the original image; S3, optimizing the region of interest to obtain a first target image; and S4, performing image fusion processing on the first target image and a new background image to obtain a second target image, where the new background image includes a second background region that is different from the first background region, and the second target image is used as sample data for model training. With this technical scheme, more sample data can be obtained from a limited amount of source data and the sample data is of higher quality, so the deep learning image model obtained after training is more accurate.

Description

Sample data generation method and device and storage medium
Technical Field
The present application relates to the field of sample data generation, and in particular to a sample data generation method and apparatus, and a storage medium.
Background
Neural networks such as deep-learning-based image classification and object detection algorithms rely on large amounts of image data to train models. The algorithm does not follow fixed rules; instead, it learns the corresponding classification experience and knowledge directly from large amounts of image data. The better the quality and the larger the quantity of the database used to train the algorithm, the more accurate the model's predictions will be.
In most scenarios, the data used to train a deep learning image model cannot be fewer than roughly two thousand to twenty thousand images per class. Limited by data collection conditions, a product developer often cannot collect this much data, which severely limits the application of deep learning methods. The method commonly used in industry is therefore data enhancement to increase the absolute amount of data, but data enhanced by traditional methods is generally of low quality and contributes little to model training.
Current practice increases data volume by collecting more data in different environments and by applying graphical transformations to real image data, such as rotation, flipping, cropping, enlargement, reduction, added white noise, color change, and contrast change. However, enhanced data obtained this way is essentially homologous to the original data: it cannot provide new parameter information for model training, and sufficient effective data cannot be collected in a limited time.
Therefore, providing a sample data generation method that can obtain more sample data from a limited amount of source data, with better sample quality, so that the deep learning image model obtained after training is more accurate, has become a technical problem to be solved by those skilled in the art.
Disclosure of Invention
To solve the above technical problem, this application provides a sample data generation method that can obtain more sample data from a limited amount of source data, with better sample quality, so that the deep learning image model obtained after training is more accurate.
The technical scheme provided by the application is as follows:
The application provides a sample data generation method comprising the following specific steps: S1, acquiring an original image, where the original image includes a region of interest and a first background region; S2, extracting the region of interest from the original image; S3, optimizing the region of interest to obtain a first target image; and S4, performing image fusion processing on the first target image and a new background image to obtain a second target image, where the new background image includes a second background region that is different from the first background region, and the second target image is used as sample data for model training.
Further, in a preferred mode of the present invention, the original image includes original satellite remote sensing image data;
the S2, including:
and performing content identification processing on the original satellite remote sensing image data through a deep learning model to obtain the region of interest.
Further, in a preferred embodiment of the present invention, performing content identification processing on the original satellite remote sensing image data through a deep learning model to obtain the region of interest includes:
identifying the content in the original image through the deep learning model, classifying that content, determining the region of interest and its position, and extracting the region of interest.
Further, in a preferred mode of the present invention, in S3 the optimization processing includes at least one of transformation processing and graphic-state processing;
wherein the transformation processing includes one or more of rotation processing, flipping processing, and scaling processing;
and the graphic-state processing includes graphic enhancement optimization of the first target image.
Further, in a preferred mode of the present invention, the rotation processing operates as follows: rotating the region of interest by a random angle;
the flipping processing operates as follows: mapping the image matrix of the region of interest;
and the scaling processing operates as follows: enlarging or reducing the image matrix of the region of interest.
Further, in a preferred mode of the present invention, the graphic-state processing includes:
S301, simulating illumination on the region of interest to obtain relit simulation images at different illumination angles;
S302, constructing a shadow of the simulated image according to the segmentation shape of the region of interest;
and S303, adjusting the position of the shadow according to the height of the simulated image above the ground and the illumination angle, to obtain the second target image.
Further, in a preferred mode of the present invention, the relationship between the position of the shadow region of the simulated image, the illumination angle, and the height of the simulated image above the ground is expressed as:
distance = H * tanθ
where distance represents the position of the shadow region of the simulated image, H represents the height of the simulated image above the ground, and θ represents the illumination angle on the simulated image.
Further, in a preferred mode of the present invention, S4 includes:
S401, acquiring a plurality of different new background images, and superimposing the second target image on the new background images to obtain superimposed images;
and S402, performing recognition and labeling processing on the superimposed images according to the position at which the second target image is superimposed on the new background image and the size of the second target image, to obtain a third target image.
The present application further provides a sample data generating apparatus, the apparatus comprising:
an acquisition module, configured to acquire an original image, where the original image includes a region of interest and a first background region;
an extraction module, configured to extract the region of interest from the original image;
an optimization module, configured to optimize the region of interest to obtain a first target image;
and a fusion module, configured to perform image fusion processing on the first target image and a new background image to obtain a second target image, where the new background image includes a second background region that is different from the first background region, and the second target image is used as sample data for model training.
The present application also provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the sample data generation method.
Compared with the prior art, the sample data generation method provided by the invention comprises the following steps: S1, acquiring an original image, where the original image includes a region of interest and a first background region; S2, extracting the region of interest from the original image; S3, optimizing the region of interest to obtain a first target image; and S4, performing image fusion processing on the first target image and a new background image to obtain a second target image, where the new background image includes a second background region that is different from the first background region, and the second target image is used as sample data for model training. The method identifies and extracts the region of interest from the original image, optimizes the extracted region of interest to obtain the first target image, and fuses the first target image with a new background image to obtain the second target image, which is used as sample data for model training. With this method, more sample data can be obtained from a limited amount of source data and the sample data is of higher quality, so the deep learning image model obtained after training is more accurate.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart illustrating steps of a sample data generating method according to an embodiment of the present invention;
fig. 2 is a diagram illustrating an example of a sample data generating method according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a sample data generating apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
To help those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present application.
It will be understood that when an element is described as being "fixed" or "disposed" on another element, it can be directly on the other element or indirectly disposed on it; when an element is described as being "connected to" another element, it can be directly or indirectly connected to it.
It will be understood that terms such as "length," "width," "upper," "lower," "front," "rear," "first," "second," "vertical," "horizontal," "top," "bottom," "inner," and "outer" indicate orientations or positional relationships based on those shown in the drawings. They are used only to facilitate and simplify the description; they do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation, and they are not to be construed as limiting the application.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "plurality" or "a plurality" means two or more unless specifically limited otherwise.
It should be understood that the structures, ratios, sizes, and the like shown in the drawings are provided only to match the disclosure of the specification, so that it can be understood and read by those skilled in the art; they do not limit the conditions under which the application can be practiced. Modifications of structure, changes of ratio, or adjustments of size that do not affect the efficacy or achievable purpose of the application remain within the scope of its disclosed technical content.
Compared with the prior art, the sample data generation method provided by the invention comprises the following steps: S1, acquiring an original image, where the original image includes a region of interest and a first background region; S2, extracting the region of interest from the original image; S3, optimizing the region of interest to obtain a first target image; and S4, performing image fusion processing on the first target image and a new background image to obtain a second target image, where the new background image includes a second background region that is different from the first background region, and the second target image is used as sample data for model training. The method identifies and extracts the region of interest from the original image, optimizes it to obtain the first target image, and fuses the first target image with a new background image to obtain the second target image, which serves as sample data for model training. With this method, more sample data can be obtained from a limited amount of source data and the sample data is of higher quality, so the deep learning image model obtained after training is more accurate.
Specifically, referring to fig. 1 and fig. 2, the sample data generation method provided in the embodiment of the present application includes the following steps: S1, acquiring an original image, where the original image includes a region of interest and a first background region; S2, extracting the region of interest from the original image; S3, optimizing the region of interest to obtain a first target image; and S4, performing image fusion processing on the first target image and a new background image to obtain a second target image, where the new background image includes a second background region that is different from the first background region, and the second target image is used as sample data for model training.
Specifically, in the embodiment of the present invention, the original image includes original satellite remote sensing image data;
the S2, including:
and performing content identification processing on the original satellite remote sensing image data through a deep learning model to obtain the region of interest.
Optionally, the deep learning model may be an existing model.
Optionally, the deep learning model may also be trained on part of the original satellite remote sensing image data. Specifically: S201, performing content identification processing and data annotation processing on part of the original satellite remote sensing image data to obtain an image data set; S202, predicting the remaining original satellite remote sensing images through the deep learning model to obtain the region of interest, where the deep learning model is trained on the image data set.
Specifically, in the embodiment of the present invention, performing content identification processing on the original satellite remote sensing image data through a deep learning model to obtain the region of interest includes:
identifying the content in the original image through the deep learning model, classifying that content, determining the region of interest and its position, and extracting the region of interest.
Specifically, in the embodiment of the present invention, in S3 the optimization processing includes at least one of transformation processing and graphic-state processing;
wherein the transformation processing includes one or more of rotation processing, flipping processing, and scaling processing;
and the graphic-state processing includes graphic enhancement optimization of the first target image.
Specifically, in the embodiment of the present invention, the rotation processing operates as follows: rotating the region of interest by a random angle.
The region of interest is rotated by an arbitrary angle in 0-360 degrees. If the sample data of the region of interest is an image matrix A, the rotation can be expressed in complex polar coordinates as:
A * exp(nπi)
where n is any real number in [0, 2); in actual processing, the complex expression is realized by the corresponding real-valued form.
The flipping processing operates as follows: mapping the image matrix of the region of interest, i.e., mapping an m × n image matrix A[i, j] to A[m − i, j] or A[i, n − j].
The scaling processing operates as follows: enlarging or reducing the image matrix of the region of interest. The image matrix A is enlarged or reduced by a coefficient a, where a may take values in (0.5, 2). A sketch of all three transformations follows.
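As a concrete illustration, the three transformations can be realized with numpy and OpenCV, the real-valued counterparts of the expressions above; the input file name and the choice of OpenCV are assumptions, since the application does not prescribe an implementation.

```python
# A sketch of the three transformations on an extracted ROI image matrix A.
import numpy as np
import cv2

A = cv2.imread("roi.png")                     # hypothetical ROI image matrix A
h, w = A.shape[:2]

# Rotation by a random angle in [0, 360) degrees, the real-valued counterpart
# of multiplying by exp(n*pi*i) with n in [0, 2).
angle = np.random.uniform(0.0, 360.0)
M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
rotated = cv2.warpAffine(A, M, (w, h))

# Flipping: map the m x n matrix A[i, j] to A[m - i, j] or A[i, n - j].
flipped_vertical = A[::-1, :]                 # A[m - i, j]
flipped_horizontal = A[:, ::-1]               # A[i, n - j]

# Scaling: enlarge or reduce by a coefficient a drawn from (0.5, 2).
a = np.random.uniform(0.5, 2.0)
scaled = cv2.resize(A, None, fx=a, fy=a, interpolation=cv2.INTER_LINEAR)
```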
Specifically, in the embodiment of the present invention, the graphic-state processing includes:
S301, simulating illumination on the region of interest to obtain relit simulation images at different illumination angles;
S302, constructing a shadow of the simulated image according to the segmentation shape of the region of interest;
and S303, adjusting the position of the shadow according to the height of the simulated image above the ground and the illumination angle, to obtain the second target image.
Specifically, in the embodiment of the present invention, the relationship between the position of the shadow region of the simulated image, the illumination angle, and the height of the simulated image above the ground is expressed as:
distance = H * tanθ
where distance represents the position of the shadow region of the simulated image, H represents the height of the simulated image above the ground, and θ represents the illumination angle on the simulated image.
It should be further noted that, in the embodiment of the present invention, the graphic-state processing of the region of interest is similar for visible-light, radar, infrared, and other scenes.
Specifically, in the embodiment of the present invention, S4 includes:
S401, acquiring a plurality of different new background images, and superimposing the second target image on the new background images to obtain superimposed images;
and S402, performing recognition and labeling processing on the superimposed images according to the position at which the second target image is superimposed on the new background image and the size of the second target image, to obtain a third target image (see the sketch after the following note).
It should be further noted that, in the embodiment of the present invention, the new background images include different backgrounds such as ocean, sky, land, forest, and desert.
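The following is a minimal sketch of S401-S402; the file names, the black-pixel transparency rule, and the bounding-box label format are assumptions made for illustration, not details specified by the application.

```python
# A sketch of S401-S402: superimpose the target at a random position on a new
# background and emit a bounding-box annotation from that position and size.
import numpy as np
import cv2

def superimpose_and_label(target, mask, background, rng=np.random.default_rng()):
    """Overlay target (h x w x 3) where mask (h x w, bool) is set onto a
    random position of background; return the fused image and its label."""
    th, tw = target.shape[:2]
    bh, bw = background.shape[:2]
    x = int(rng.integers(0, bw - tw))          # assumes background larger than target
    y = int(rng.integers(0, bh - th))
    fused = background.copy()
    region = fused[y:y + th, x:x + tw]
    region[mask] = target[mask]                # simple mask-based fusion
    label = {"bbox": [x, y, x + tw, y + th]}   # position and size of the target
    return fused, label

background = cv2.imread("ocean_background.png")  # e.g. ocean, sky, land, forest, desert
target = cv2.imread("first_target.png")
mask = target.sum(axis=2) > 0                    # assume black pixels are transparent
sample, label = superimpose_and_label(target, mask, background)
```

Repeating this over many different backgrounds yields labelled samples whose backgrounds are not homologous to the source data, which is the point of step S4.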
As described above, the sample data generation method of the embodiment of the present invention identifies and extracts the region of interest from the original image and optimizes it; during optimization, a three-dimensional model of the object is constructed to simulate the effect on the region of interest of illumination, shadow, and radar-reflection changes at different angles, yielding its graphic states in different scenes and thus the first target image; the first target image is then fused with a new background image to obtain the second target image, which serves as sample data for model training. With this method, more sample data can be obtained from a limited amount of source data and the sample data is of higher quality, so the deep learning image model obtained after training is more accurate.
Fig. 3 is a schematic diagram of a sample data generating apparatus according to an embodiment of the present invention. As shown in fig. 3, the apparatus includes:
an obtaining module 100, configured to obtain an original image, where the original image includes a region of interest and a first background region;
an extraction module 200, configured to extract the region of interest from the original image;
the optimization module 300 is configured to perform optimization processing on the region of interest to obtain a first target image;
a fusion module 400, configured to perform image fusion processing on the first target image and a new background image to obtain a second target image, where the new background image includes a second background region, and the second background region is different from the first background region, and the second target image is used as sample data for model training.
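The application describes the modules' roles but not their interfaces, so the following is only a sketch of how the four modules might compose, with the module implementations injected as callables.

```python
# Hypothetical composition of the four modules of the apparatus.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class SampleDataGenerator:
    acquire: Callable[[], Any]         # acquisition module: -> original image
    extract: Callable[[Any], Any]      # extraction module: original image -> ROI
    optimize: Callable[[Any], Any]     # optimization module: ROI -> first target image
    fuse: Callable[[Any, Any], Any]    # fusion module: (first target, background) -> second target

    def generate(self, new_background: Any) -> Any:
        original = self.acquire()
        roi = self.extract(original)
        first_target = self.optimize(roi)
        return self.fuse(first_target, new_background)  # sample data for training
```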
In some embodiments, a computer device is provided.
Fig. 4 is a schematic structural diagram of a computer device provided in an embodiment of the present application. As shown in fig. 4, the computer device includes: a processor 11 and a memory 12.
The memory 12 stores programs and data; the processor 11 calls and executes the programs stored in the memory to implement the steps of the method embodiments of the present application.
In the above computer device, the memory and the processor are electrically connected, directly or indirectly, to enable data transmission or interaction. For example, the components may be electrically connected to one another via one or more communication buses or signal lines. The memory stores computer-executable instructions for implementing the sample data generation method, including at least one software functional module that may be stored in the memory in the form of software or firmware; the processor performs various functional applications and data processing by running the software programs and modules stored in the memory.
The memory may be, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory stores programs; the processor executes a program after receiving an execution instruction. Further, the software programs and modules in the memory may also include an operating system, which may include software components and/or drivers for managing system tasks (e.g., memory management, storage device control, power management) and may communicate with various hardware or software components to provide an operating environment for other software components.
The processor may be an integrated circuit chip with signal processing capability. It may be a general-purpose processor, such as a Central Processing Unit (CPU) or a Network Processor (NP), and may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
In some embodiments, a computer-readable storage medium is provided, having stored thereon computer-executable instructions which, when executed by a processor, perform the steps of the method embodiments of the present application.
In some embodiments, a computer program product is provided, comprising a computer program which, when executed by a processor, performs the steps of the method embodiments of the present application.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A sample data generating method, characterized in that the method comprises the following steps:
s1, acquiring an original image, wherein the original image comprises a region of interest and a first background region;
s2, extracting the region of interest from the original image;
s3, optimizing the region of interest to obtain a first target image;
and S4, carrying out image fusion processing on the first target image and a new background image to obtain a second target image, wherein the new background image comprises a second background region, the second background region is different from the first background region, and the second target image is used as sample data for model training.
2. The method of claim 1, wherein the original image comprises original satellite remote sensing image data;
wherein S2 includes:
performing content identification processing on the original satellite remote sensing image data through a deep learning model to obtain the region of interest.
3. The method according to claim 2, wherein performing content identification processing on the original satellite remote sensing image data through a deep learning model to obtain the region of interest comprises:
identifying the content in the original image through the deep learning model, classifying that content, determining the region of interest and its position, and extracting the region of interest.
4. The method according to claim 1, wherein in S3 the optimization processing includes at least one of transformation processing and graphic-state processing;
wherein the transformation processing includes one or more of rotation processing, flipping processing, and scaling processing;
and the graphic-state processing includes graphic enhancement optimization of the first target image.
5. The method of claim 4, wherein the rotation processing operates as follows: rotating the region of interest by a random angle;
the flipping processing operates as follows: mapping the image matrix of the region of interest;
and the scaling processing operates as follows: enlarging or reducing the image matrix of the region of interest.
6. The method of claim 4, wherein the graphic-state processing comprises:
S301, simulating illumination on the region of interest to obtain relit simulation images at different illumination angles;
S302, constructing a shadow of the simulated image according to the segmentation shape of the region of interest;
and S303, adjusting the position of the shadow according to the height of the simulated image above the ground and the illumination angle, to obtain the second target image.
7. The method of claim 6, wherein the relationship between the position of the shadow region of the simulated image, the illumination angle, and the height of the simulated image above the ground is expressed as:
distance = H * tanθ
where distance represents the position of the shadow region of the simulated image, H represents the height of the simulated image above the ground, and θ represents the illumination angle on the simulated image.
8. The method according to claim 1, wherein S4 includes:
S401, acquiring a plurality of different new background images, and superimposing the second target image on the new background images to obtain superimposed images;
and S402, performing recognition and labeling processing on the superimposed images according to the position at which the second target image is superimposed on the new background image and the size of the second target image, to obtain a third target image.
9. An apparatus for generating sample data, the apparatus comprising:
an acquisition module, configured to acquire an original image, where the original image includes a region of interest and a first background region;
an extraction module, configured to extract the region of interest from the original image;
an optimization module, configured to optimize the region of interest to obtain a first target image;
and a fusion module, configured to perform image fusion processing on the first target image and a new background image to obtain a second target image, where the new background image includes a second background region that is different from the first background region, and the second target image is used as sample data for model training.
10. A computer-readable storage medium having stored therein computer-executable instructions for implementing the sample data generation method of any one of claims 1-8 when executed by a processor.
CN202210616969.6A 2022-06-01 2022-06-01 Sample data generation method and device and storage medium Pending CN115115903A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210616969.6A CN115115903A (en) 2022-06-01 2022-06-01 Sample data generation method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210616969.6A CN115115903A (en) 2022-06-01 2022-06-01 Sample data generation method and device and storage medium

Publications (1)

Publication Number Publication Date
CN115115903A true CN115115903A (en) 2022-09-27

Family

ID=83326329

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210616969.6A Pending CN115115903A (en) 2022-06-01 2022-06-01 Sample data generation method and device and storage medium

Country Status (1)

Country Link
CN (1) CN115115903A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117557466A (en) * 2024-01-11 2024-02-13 中国科学院空天信息创新研究院 Optical remote sensing image target image enhancement method and device based on imaging conditions
CN117557466B (en) * 2024-01-11 2024-04-09 中国科学院空天信息创新研究院 Optical remote sensing image target image enhancement method and device based on imaging conditions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20220927