WO2022095570A1 - Urban vegetation type identification method and system, and device and medium - Google Patents

Urban vegetation type identification method and system, and device and medium

Info

Publication number
WO2022095570A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
fused
feature vector
type
color
Prior art date
Application number
PCT/CN2021/115177
Other languages
French (fr)
Chinese (zh)
Inventor
张�杰
Original Assignee
上海圣之尧智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海圣之尧智能科技有限公司 filed Critical 上海圣之尧智能科技有限公司
Publication of WO2022095570A1 publication Critical patent/WO2022095570A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition

Definitions

  • The invention relates to a technology in the field of vegetation identification, specifically a method, system, device and medium for identifying urban vegetation types.
  • Vegetation types are classified according to the external traits of organisms such as plant structure, shape, size, color, etc., which are jointly determined by genes and the environment.
  • At present, vegetation types are mainly obtained by manual field measurement or by large dedicated imaging systems.
  • Manual measurement suffers from low efficiency, poorly controllable data accuracy and high uncertainty in observation error, while large dedicated imaging systems are expensive, complex and poorly portable.
  • To address these shortcomings, the present invention proposes a method, system, device and medium for identifying urban vegetation types, in which a drone captures multiple detection images of a detection area from above.
  • The detection images are stitched into one complete fused image; feature extraction on the fused image yields its color features and texture features; and multiple type coefficients are then computed from the color and texture features.
  • The minimum value among the multiple type coefficients indicates the vegetation type, so the method obtains the vegetation type of the detection area quickly and at low cost.
  • a method for identifying urban vegetation types comprising:
  • based on the fused color feature vector, the fused texture feature vector, multiple type color feature vectors corresponding to the fused color feature vector, and multiple type texture feature vectors corresponding to the fused texture feature vector, obtaining multiple type coefficients;
  • the vegetation type corresponding to the minimum value among the plurality of type coefficients is output.
  • performing image splicing on the plurality of detection images to obtain a fusion image of the detection area includes:
  • performing image preprocessing on each of the detected images includes:
  • Image enhancement is performed on the detected image after image smoothing.
  • the type coefficient is computed by a formula (rendered as images in the source) using the following notation:
  • d ni is the type coefficient;
  • a i is the type color feature vector;
  • b i is the type texture feature vector;
  • c n is the fused color feature vector;
  • t n is the fused texture feature vector;
  • i is the vegetation type, and i is an integer.
  • the type color feature vector and the fused color feature vector are both 256-dimensional vectors; the type texture feature vector and the fused texture feature vector are both 3-dimensional vectors.
  • a method for identifying urban vegetation types comprising:
  • a plurality of detection images of the detection area are collected by a drone, wherein the detection images correspond to parts of the detection area;
  • the UAV performs image stitching on the plurality of detection images to obtain a fused image of the detection area;
  • the UAV performs feature extraction on the fused image to obtain a fused color feature vector and a fused texture feature vector of the fused image;
  • the UAV obtains multiple type coefficients based on the fused color feature vector, the fused texture feature vector, the multiple type color feature vectors corresponding to the fused color feature vector, and the multiple type texture feature vectors corresponding to the fused texture feature vector;
  • the UAV outputs the vegetation type corresponding to the minimum value among the plurality of type coefficients.
  • performing feature extraction on the fused image to obtain the fused color feature vector and fused texture feature vector of the fused image further includes:
  • a system for identifying urban vegetation types comprising:
  • a collection module, which collects a plurality of detection images of a detection area, wherein each detection image corresponds to a part of the detection area;
  • a fusion module, which performs image stitching on the plurality of detection images to obtain a fused image of the detection area;
  • an extraction module, which performs feature extraction on the fused image to obtain a fused color feature vector and a fused texture feature vector of the fused image;
  • a coefficient acquisition module, which obtains multiple type coefficients based on the fused color feature vector, the fused texture feature vector, the multiple type color feature vectors corresponding to the fused color feature vector, and the multiple type texture feature vectors corresponding to the fused texture feature vector;
  • an output module, which outputs the vegetation type corresponding to the minimum value among the multiple type coefficients.
  • a device for identifying urban vegetation types comprising:
  • the processor is configured to execute the steps of the above-mentioned urban vegetation type identification method by executing the executable instructions.
  • a computer-readable storage medium for storing a program, which implements the steps of the above-mentioned method for identifying urban vegetation types when the program is executed.
  • The urban vegetation type identification method in the present invention stitches the drone's detection images into a fused image and compares the fused image's color and texture features against the reference feature vectors of each vegetation type.
  • The comparison is fast, and the vegetation type of the detection area can be determined in a short time at low cost.
  • FIG. 1 is a schematic diagram of an implementation scenario of the present invention
  • Figure 2 is a schematic diagram of a method for identifying urban vegetation types
  • FIG. 3 is a schematic diagram of a fusion image acquisition process;
  • FIG. 4 is a schematic diagram of an image preprocessing process;
  • FIG. 5 is a schematic diagram of a feature extraction process;
  • FIG. 6 is a schematic diagram of another urban vegetation type identification method of the present invention;
  • FIG. 7 is a structural block diagram of an urban vegetation type identification system;
  • FIG. 8 is a schematic diagram of an urban vegetation type identification device of the present invention.
  • FIG. 9 is a schematic structural diagram of a computer-readable storage medium of the present invention.
  • “First”, “second” and similar words do not denote any order, quantity or importance; they merely distinguish different components.
  • “Comprises”, “comprising” and similar words mean that the element or item preceding the word encompasses the elements or items listed after it and their equivalents, without excluding other elements or items. Words such as “connected” or “coupled” are not limited to physical or mechanical connections and may include electrical connections, whether direct or indirect. “Up”, “down”, “left”, “right” and the like describe only relative positional relationships, which may change accordingly when the absolute position of the described object changes.
  • a method for identifying urban vegetation types is provided.
  • FIG. 1 is a schematic diagram of an implementation scenario of the present invention.
  • FIG. 1 shows an implementation scenario 100 of a method for identifying urban vegetation types.
  • a drone 102 is arranged above the detection area 101 shown in FIG. 1 , and the drone 102 flies above the detection area 101 .
  • the drone 102 hovering over the detection area 101 is provided with an image acquisition device 103 .
  • the drone 102 uses the image acquisition device 103 to collect detection images of the detection area 101. Because the image acquisition device 103 cannot capture the full detection area 101 in a single shot, each detection image includes only a part of the detection area 101.
  • FIG. 2 is a schematic diagram of a method for identifying urban vegetation types.
  • the urban vegetation type identification method shown in FIG. 2 includes steps S101, S102, S103, S104, and S105.
  • Step S101: collect a plurality of detection images of a detection area, wherein each detection image corresponds to a part of the detection area.
  • Step S102: perform image stitching on the plurality of detection images to obtain a fused image of the detection area.
  • Step S103: perform feature extraction on the fused image to obtain a fused color feature vector and a fused texture feature vector of the fused image.
  • Step S104: obtain multiple type coefficients based on the fused color feature vector, the fused texture feature vector, the multiple type color feature vectors corresponding to the fused color feature vector, and the multiple type texture feature vectors corresponding to the fused texture feature vector.
  • Step S105: output the vegetation type corresponding to the minimum value among the multiple type coefficients.
  • FIG. 3 is a schematic diagram of a fusion image acquisition process.
  • step S102 specifically includes the following steps: step S201 , step S202 , and step S203 .
  • step S201 image preprocessing is performed on each detected image.
  • step S202 image registration is performed on the detection image after image preprocessing to obtain a region image.
  • Image registration is a key step in the process of image stitching, and the accuracy of registration determines the quality of regional images obtained by stitching.
  • Image registration refers to aligning two or more images in space, and it achieves registration by calculating the best match between the two images.
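The “best match” computation can be illustrated with a deliberately tiny 1-D sketch: slide one row over the other and keep the overlap width that minimizes the mean squared difference. The function name and the 1-D simplification are illustrative assumptions; real registration works on 2-D images with richer transforms.

```python
def best_overlap(left, right, max_overlap):
    """Find the overlap width at which the tail of `left` best matches the
    head of `right`, minimizing mean squared difference (a 1-D stand-in
    for spatial registration of two detection images)."""
    best_w, best_err = 1, float("inf")
    for w in range(1, max_overlap + 1):
        err = sum((a - b) ** 2 for a, b in zip(left[-w:], right[:w])) / w
        if err < best_err:
            best_w, best_err = w, err
    return best_w

# The two rows share their last/first three samples, so width 3 wins.
print(best_overlap([1, 2, 3, 7, 8, 9], [7, 8, 9, 4, 5, 6], max_overlap=5))
```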
  • step S203 image fusion is performed on the region images to obtain a fusion image.
  • Image fusion eliminates obvious stitching traces by fusing and interpolating pixels in the overlapping parts of the images, producing a smooth, seamless image, namely the fused image.
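The pixel fusion over the overlap can likewise be sketched in 1-D: blending weights ramp linearly from one image to the other across the overlap, which removes the hard seam. The fixed overlap width (assumed at least 1) and the row representation are simplifying assumptions, not the patent's algorithm.

```python
def feather_blend_rows(left, right, overlap):
    """Blend two 1-D pixel rows whose last/first `overlap` samples cover the
    same ground, producing one seamless row (a toy stand-in for pixel
    fusion over the overlapping part of two detection images)."""
    blended = []
    for k in range(overlap):
        w = (k + 1) / (overlap + 1)  # weight ramps from the left image to the right image
        blended.append((1 - w) * left[len(left) - overlap + k] + w * right[k])
    return left[:-overlap] + blended + right[overlap:]

# Both sources agree in the overlap here, so the seam region stays level.
print(feather_blend_rows([10, 10, 10, 20, 20], [20, 20, 30, 30, 30], overlap=2))
```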
  • FIG. 4 is a schematic diagram of an image preprocessing process.
  • step S201 specifically includes step S301, step S302, and step S303.
  • In step S301, image distortion correction is performed on each detection image. Because the image acquisition device used for UAV aerial photography is an ordinary digital camera, optical distortion is present at the image edges. This distortion shifts image points away from their true positions, displacing image-point coordinates and altering the apparent ground position of real objects, which ultimately affects matching accuracy and the generation of digital orthophoto products. The distortion must therefore be corrected before any post-processing of the images.
  • The self-calibration method in block aerial triangulation is mainly used to correct possible systematic errors, including the camera's actually measured focal length f, the principal point offsets Δx, Δy, and the various distortion parameters of the lens, where:
  • Δx, Δy are the distortion correction parameters of the digital camera;
  • (x, y) are the coordinates of the image point in the image plane coordinate system;
  • a is the non-square scale factor of the CCD;
  • b is the non-orthogonality distortion coefficient.
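Radial lens distortion of the kind described above is commonly expressed with the Brown-Conrady model. The sketch below applies only the radial terms (k1, k2) to normalized image-plane coordinates; the coefficient values are made up for illustration and are not the parameters Δx, Δy, a, b of the text, and correction proper would invert this forward mapping.

```python
def apply_radial_distortion(x, y, k1, k2):
    """Map an ideal normalized image point (x, y) to where a lens with
    radial distortion coefficients k1, k2 would actually image it
    (Brown-Conrady radial terms only)."""
    r2 = x * x + y * y
    factor = 1 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

# A point far from the principal point is displaced more than one near it.
print(apply_radial_distortion(0.5, 0.0, k1=-0.1, k2=0.01))
print(apply_radial_distortion(0.05, 0.0, k1=-0.1, k2=0.01))
```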
  • step S302 image smoothing is performed on the detected image.
  • the detection images collected by UAVs will contain more or less noise due to external or internal interference, which will degrade the quality of the images.
  • the purpose of image smoothing is to reduce noise and improve image quality.
  • In step S303, image enhancement is performed on the smoothed detection image. Its purpose is to improve the visual effect of the image so that the processed image meets the needs of particular analyses better than the original. After enhancement, the UAV detection image is better suited to human observation and to computer analysis and processing, laying a good foundation for subsequent steps; enhancement adds no new information to the image data and does not change the image's basic characteristics.
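A minimal pure-Python sketch of these two preprocessing steps on a single pixel row: a 3-tap moving average for smoothing (step S302) followed by min-max contrast stretching for enhancement (step S303). Real pipelines use 2-D filters, and the function names are illustrative, not from the source.

```python
def smooth(row):
    """3-tap moving average; border pixels are copied unchanged (noise reduction)."""
    out = list(row)
    for i in range(1, len(row) - 1):
        out[i] = (row[i - 1] + row[i] + row[i + 1]) / 3
    return out

def stretch(row, lo=0, hi=255):
    """Linear contrast stretch of the row to the range [lo, hi]."""
    mn, mx = min(row), max(row)
    if mx == mn:
        return [lo] * len(row)
    return [lo + (p - mn) * (hi - lo) / (mx - mn) for p in row]

noisy = [100, 104, 98, 150, 102, 99, 101]
print(stretch(smooth(noisy)))  # spike at index 3 is damped, then contrast is spread
```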
  • FIG. 5 is a schematic diagram of a feature extraction process.
  • step S103 further includes step S401 and step S402.
  • step S401 the fused image is converted from the RGB color space to the HSV color space.
  • Common color spaces include the RGB color space and the HSV color space. Because the RGB color space neither represents the cognitive attributes of color intuitively nor is perceptually uniform, the present invention uses the HSV color space to represent color features.
  • the HSV color space is a color model for visual perception.
  • the human eye's perception of color mainly includes three elements: one is the hue of the color, the other is the saturation of the color, and the third is the brightness of the color.
  • Hue refers to the color of the light in the color image
  • Saturation refers to the depth of the color in the color image
  • Brightness refers to the degree of lightness or darkness that the human eye perceives from a color image, and it is proportional to the reflectivity of the object. Since the fused image is initially expressed in the RGB color model, its color values must be converted from the RGB expression to the HSV expression: a given set of (R, G, B) values in the RGB color space is converted to a set of (H, S, V) values by the standard conversion formulas (rendered as images in the source).
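Since the conversion formula images did not survive extraction, the standard RGB-to-HSV conversion can be sketched with Python's stdlib colorsys. The wrapper scales 8-bit RGB into colorsys's 0-1 range and reports H in degrees and S, V in percent; those output ranges are a presentational choice, not from the source.

```python
import colorsys

def rgb_to_hsv_degrees(r, g, b):
    """Convert 8-bit (R, G, B) to (hue in degrees, saturation %, value %)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return h * 360, s * 100, v * 100

# A vegetation-like green: hue lands near 120 degrees, fully saturated.
print(rgb_to_hsv_degrees(0, 128, 0))
```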
  • step S402 feature extraction is performed on the fused image in the HSV color space to obtain a fused color feature vector and a fused texture feature vector.
  • In step S104, the type coefficient is computed by a formula (rendered as images in the source) using the following notation:
  • d ni is the type coefficient;
  • a i is the type color feature vector;
  • b i is the type texture feature vector;
  • c n is the fused color feature vector;
  • t n is the fused texture feature vector;
  • i is the vegetation type, and i is an integer.
  • the type color feature vector and the fused color feature vector are both 256-dimensional vectors; the type texture feature vector and the fused texture feature vector are both 3-dimensional vectors.
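Because the exact formula images are not reproduced in this text, the sketch below assumes the type coefficient is a distance between the fused vectors and each type's reference vectors, with the smallest coefficient selecting the type. The Euclidean form, the function names, and the 2-dimensional color vectors (the text specifies 256) are illustrative assumptions consistent with, but not confirmed by, the source.

```python
import math

def type_coefficient(c_n, t_n, a_i, b_i):
    """d_ni built from the fused color/texture vectors (c_n, t_n) and one
    type's reference vectors (a_i, b_i); Euclidean distance is assumed."""
    return math.dist(c_n, a_i) + math.dist(t_n, b_i)

def classify(c_n, t_n, references):
    """Step S105: the vegetation type with the minimum coefficient wins."""
    return min(references,
               key=lambda name: type_coefficient(c_n, t_n, *references[name]))

refs = {
    "lawn":  ([0.9, 0.1], [0.2, 0.8, 0.1]),
    "shrub": ([0.2, 0.8], [0.7, 0.1, 0.5]),
}
print(classify([0.85, 0.15], [0.25, 0.75, 0.1], refs))  # nearest references: "lawn"
```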
  • Another method for identifying urban vegetation types is provided.
  • FIG. 6 is another method for identifying urban vegetation types according to the present invention.
  • The method shown in FIG. 6 includes the following steps. Step S501: above a detection area, a drone collects multiple detection images of the detection area, wherein each detection image corresponds to a part of the detection area. Step S502: the drone performs image stitching on the multiple detection images to obtain a fused image of the detection area. Step S503: the drone performs feature extraction on the fused image to obtain a fused color feature vector and a fused texture feature vector of the fused image. Step S504: the drone obtains multiple type coefficients based on the fused color feature vector, the fused texture feature vector, the multiple type color feature vectors corresponding to the fused color feature vector, and the multiple type texture feature vectors corresponding to the fused texture feature vector. Step S505: the drone outputs the vegetation type corresponding to the minimum value among the multiple type coefficients.
  • an urban vegetation type identification system is provided.
  • FIG. 7 is a structural block diagram of an urban vegetation type identification system.
  • the system 200 shown in FIG. 7 includes:
  • the collection module 201 collects a plurality of detection images of a detection area, wherein the detection images correspond to parts of the detection area;
  • the fusion module 202 performs image splicing on a plurality of detection images to obtain a fusion image of a detection area;
  • the extraction module 203 performs feature extraction on the fused image to obtain a fused color feature vector and a fused texture feature vector of the fused image;
  • the coefficient acquisition module 204 obtains multiple types of coefficients based on the fusion color feature vector, the fusion texture feature vector, the multiple types of color feature vectors corresponding to the fusion color feature vector, and the multiple types of texture feature vectors corresponding to the fusion texture feature vector;
  • the output module 205 outputs the vegetation type corresponding to the minimum value among the plurality of type coefficients.
  • a device for identifying urban vegetation types, comprising: a processor; and a memory storing instructions executable by the processor; wherein the processor is configured to perform the steps of the urban vegetation type identification method by executing the executable instructions.
  • FIG. 8 is a schematic diagram of the apparatus for identifying urban vegetation types according to the present invention.
  • the electronic device 600 according to this embodiment of the present invention is described below with reference to FIG. 8 .
  • the electronic device 600 shown in FIG. 8 is only an example, and should not impose any limitation on the function and scope of use of the embodiments of the present invention.
  • electronic device 600 takes the form of a general-purpose computing device.
  • Components of the electronic device 600 may include, but are not limited to, at least one processing unit 610, at least one storage unit 620, a bus 630 connecting different platform components (including the storage unit 620 and the processing unit 610), a display unit 640, and the like.
  • the storage unit stores program code executable by the processing unit 610, so that the processing unit 610 performs the steps described above in this specification.
  • the processing unit 610 may perform the steps shown in FIG. 2 .
  • the storage unit 620 may include a readable medium in the form of a volatile storage unit, such as a random access storage unit (RAM) 6201 and/or a cache storage unit 6202 , and may further include a read only storage unit (ROM) 6203 .
  • the storage unit 620 may also include a program/utility 6204 having a set (at least one) of program modules 6205, including but not limited to an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
  • the bus 630 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
  • the electronic device 600 may also communicate with one or more external devices 700 (e.g., keyboards, pointing devices, Bluetooth devices), with one or more devices that enable a user to interact with the electronic device 600, and/or with any device (e.g., a router or modem) that enables the electronic device 600 to communicate with one or more other computing devices. Such communication may occur through the input/output (I/O) interface 650. The electronic device 600 may also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through the network adapter 660, which may communicate with the other modules of the electronic device 600 through the bus 630. It should be understood that, although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 600, including but not limited to microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage platforms.
  • a computer-readable storage medium for storing a program, which implements the steps of the above method when the program is executed.
  • FIG. 9 is a schematic structural diagram of a computer-readable storage medium of the present invention.
  • a program product 800 for implementing the above method according to an embodiment of the present invention is described; it may take the form of a portable compact disc read-only memory (CD-ROM) containing program code, and may run on a terminal device such as a personal computer.
  • the program product of the present invention is not limited thereto, and in this document, a readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • the program product may employ any combination of one or more readable media.
  • the readable medium may be a readable signal medium or a readable storage medium.
  • the readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above. More specific examples (non-exhaustive list) of readable storage media include: electrical connections with one or more wires, portable disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, portable compact disk read only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
  • A readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing.
  • A readable signal medium may also be any readable medium other than a readable storage medium that can send, propagate, or transport the program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code contained on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, or any suitable combination of the foregoing.
  • Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar.
  • the program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server execute on.
  • the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the Internet using an Internet service provider).
  • The method, system, device and medium for identifying urban vegetation types of the present invention capture multiple detection images of a detection area by drone from above, stitch the detection images into one complete fused image, extract color and texture features from the fused image, and compute multiple type coefficients from those features; the minimum coefficient indicates the vegetation type. The vegetation type of the detection area is thus obtained quickly and at low cost.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

An urban vegetation type identification method and system, and a device and a medium. The method comprises: acquiring a plurality of detection images of a detection area, wherein the detection images correspond to part of the detection area; performing image stitching on the plurality of detection images so as to obtain a fused image of the detection area; performing feature extraction on the fused image so as to obtain fused color feature vectors and fused texture feature vectors of the fused image; on the basis of the fused color feature vectors, the fused texture feature vectors, a plurality of type color feature vectors corresponding to the fused color feature vectors, and a plurality of type texture feature vectors corresponding to the fused texture feature vectors, obtaining a plurality of type coefficients; and outputting a vegetation type corresponding to the minimum value in the plurality of type coefficients.

Description

A method, system, device and medium for identifying urban vegetation types

TECHNICAL FIELD

The invention relates to a technology in the field of vegetation identification, specifically a method, system, device and medium for identifying urban vegetation types.

BACKGROUND

In urban planning and management, the types of urban vegetation need to be identified. Vegetation types are classified according to externally observable plant traits such as structure, shape, size and color, which are jointly determined by genes and the environment. At present, vegetation types are mainly obtained by manual field measurement or by large dedicated imaging systems; manual measurement suffers from low efficiency, poorly controllable data accuracy and high uncertainty in observation error, while large dedicated imaging systems are expensive, complex and poorly portable.

SUMMARY OF THE INVENTION

To address the above shortcomings of the prior art, the present invention proposes a method, system, device and medium for identifying urban vegetation types. A drone captures multiple detection images of a detection area from above; the images are stitched into one complete fused image; color and texture features are extracted from the fused image; multiple type coefficients are computed from the color and texture features; and the vegetation type corresponding to the minimum coefficient is output. The vegetation type of the detection area is thus obtained quickly and at low cost.
According to one aspect of the present invention, a method for identifying urban vegetation types comprises:

collecting a plurality of detection images of a detection area, wherein each detection image corresponds to a part of the detection area;

performing image stitching on the plurality of detection images to obtain a fused image of the detection area;

performing feature extraction on the fused image to obtain a fused color feature vector and a fused texture feature vector of the fused image;

obtaining a plurality of type coefficients based on the fused color feature vector, the fused texture feature vector, a plurality of type color feature vectors corresponding to the fused color feature vector, and a plurality of type texture feature vectors corresponding to the fused texture feature vector; and

outputting the vegetation type corresponding to the minimum value among the plurality of type coefficients.
Preferably, performing image stitching on the plurality of detection images to obtain a fused image of the detection area comprises:

performing image preprocessing on each detection image;

performing image registration on the preprocessed detection images to obtain a region image; and

performing image fusion on the region image to obtain the fused image.
Preferably, performing image preprocessing on each detection image comprises:

performing image distortion correction on each detection image;

performing image smoothing on the detection image; and

performing image enhancement on the smoothed detection image.
Preferably, the type coefficient is calculated by the following formula (the formula images PCTCN2021115177-appb-000001 and PCTCN2021115177-appb-000002 are not reproduced here), where:

d ni is the type coefficient;
a i is the type color feature vector;
b i is the type texture feature vector;
c n is the fused color feature vector;
t n is the fused texture feature vector;
i is the vegetation type, and i is an integer.
Preferably, the type color feature vector and the fused color feature vector are both 256-dimensional vectors, and the type texture feature vector and the fused texture feature vector are both 3-dimensional vectors.
According to one aspect of the present invention, a method for identifying urban vegetation types comprises:

above a detection area, collecting, by a drone, a plurality of detection images of the detection area, wherein each detection image corresponds to a part of the detection area;

performing, by the drone, image stitching on the plurality of detection images to obtain a fused image of the detection area;

performing, by the drone, feature extraction on the fused image to obtain a fused color feature vector and a fused texture feature vector of the fused image;

obtaining, by the drone, a plurality of type coefficients based on the fused color feature vector, the fused texture feature vector, a plurality of type color feature vectors corresponding to the fused color feature vector, and a plurality of type texture feature vectors corresponding to the fused texture feature vector; and

outputting, by the drone, the vegetation type corresponding to the minimum value among the plurality of type coefficients.
Preferably, performing feature extraction on the fused image to obtain the fused color feature vector and the fused texture feature vector of the fused image further includes:
converting the fused image from the RGB color space to the HSV color space;
performing feature extraction on the fused image in the HSV color space to obtain the fused color feature vector and the fused texture feature vector.
According to one aspect of the present invention, a system for identifying urban vegetation types is provided, comprising:
a collection module, which collects a plurality of detection images of a detection area, wherein each detection image corresponds to a part of the detection area;
a fusion module, which performs image stitching on the plurality of detection images to obtain a fused image of the detection area;
an extraction module, which performs feature extraction on the fused image to obtain a fused color feature vector and a fused texture feature vector of the fused image;
a coefficient acquisition module, which obtains a plurality of type coefficients based on the fused color feature vector, the fused texture feature vector, a plurality of type color feature vectors corresponding to the fused color feature vector, and a plurality of type texture feature vectors corresponding to the fused texture feature vector;
an output module, which outputs the vegetation type corresponding to the minimum value among the plurality of type coefficients.
According to one aspect of the present invention, a device for identifying urban vegetation types is provided, comprising:
a processor; and
a memory storing executable instructions for the processor;
wherein the processor is configured to execute, via the executable instructions, the steps of the above method for identifying urban vegetation types.
According to one aspect of the present invention, a computer-readable storage medium is provided for storing a program which, when executed, implements the steps of the above method for identifying urban vegetation types.
The above technical solutions have the following beneficial effects:
The method, system, device and medium for identifying urban vegetation types of the present invention collect multiple detection images of a detection area with a drone flying above the area, stitch them into one complete fused image, extract color features and texture features from the fused image, and compute a plurality of type coefficients from those features; the vegetation type corresponding to the minimum type coefficient is then output, so that the vegetation types in the detection area can be obtained quickly and at low cost.
Other features and advantages of the present invention, as well as the structure and operation of its various embodiments, are described in detail below with reference to the accompanying drawings. It should be noted that the present invention is not limited to the specific embodiments described herein; these embodiments are presented for illustrative purposes only.
Description of the drawings
Other features, objects and advantages of the present invention will become more apparent upon reading the following detailed description of non-limiting embodiments with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an implementation scenario of the present invention;
Fig. 2 is a schematic diagram of a method for identifying urban vegetation types;
Fig. 3 is a schematic diagram of a fused-image acquisition process;
Fig. 4 is a schematic diagram of an image preprocessing process;
Fig. 5 is a schematic diagram of a feature extraction process;
Fig. 6 shows another method for identifying urban vegetation types according to the present invention;
Fig. 7 is a structural block diagram of a system for identifying urban vegetation types;
Fig. 8 is a schematic diagram of the device for identifying urban vegetation types of the present invention;
Fig. 9 is a schematic structural diagram of the computer-readable storage medium of the present invention.
The features and advantages of the present invention will become more apparent from the following detailed description taken in conjunction with the accompanying drawings, in which like reference numerals generally identify identical, functionally similar and/or structurally similar elements.
Detailed description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
As used in this application, "first", "second" and similar words do not denote any order, quantity or importance, but merely distinguish different components. "Comprise", "include" and similar words mean that the element or item preceding the word covers the elements or items listed after the word and their equivalents, without excluding other elements or items. "Connected" and similar words are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "Up", "down", "left", "right", etc. only indicate relative positional relationships, which may change accordingly when the absolute position of the described object changes.
It should be noted that, provided there is no conflict, the embodiments of the present invention and the features of the embodiments may be combined with each other.
The present invention is further described below with reference to the accompanying drawings and specific embodiments, which are not intended to limit the present invention.
According to one aspect of the present invention, a method for identifying urban vegetation types is provided.
Fig. 1 is a schematic diagram of an implementation scenario of the present invention. Fig. 1 shows an implementation scenario 100 of the method for identifying urban vegetation types: a drone 102 flies above the detection area 101, which contains buildings and vegetation such as trees, flowers and grass. The drone 102 hovering above the detection area 101 is provided with an image acquisition device 103, which the drone 102 uses to collect detection images of the detection area 101. Each detection image contains only a part of the detection area 101, because the image acquisition device 103 of the drone 102 cannot capture a complete image of the detection area 101 in a single shot.
Fig. 2 is a schematic diagram of a method for identifying urban vegetation types. The method shown in Fig. 2 includes steps S101 to S105. Step S101: collect a plurality of detection images of a detection area, wherein each detection image corresponds to a part of the detection area. Step S102: perform image stitching on the plurality of detection images to obtain a fused image of the detection area. Step S103: perform feature extraction on the fused image to obtain a fused color feature vector and a fused texture feature vector of the fused image. Step S104: obtain a plurality of type coefficients based on the fused color feature vector, the fused texture feature vector, the plurality of type color feature vectors corresponding to the fused color feature vector, and the plurality of type texture feature vectors corresponding to the fused texture feature vector. Step S105: output the vegetation type corresponding to the minimum value among the plurality of type coefficients.
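The five steps S101 to S105 can be sketched end to end in Python. This is a toy illustration only: the stand-in stitching, feature-extraction and distance functions, and every name below, are hypothetical placeholders, not the patent's actual algorithms.

```python
# Toy stand-ins: an "image" is a flat list of (r, g, b) pixel tuples.

def stitch(images):
    """S102 stand-in: concatenate partial images into one fused image."""
    fused = []
    for img in images:
        fused.extend(img)
    return fused

def extract_features(image):
    """S103 stand-in: mean color as the 'color feature',
    a constant placeholder as the 'texture feature'."""
    n = len(image)
    color = tuple(sum(px[c] for px in image) / n for c in range(3))
    texture = (0.0, 0.0, 0.0)  # placeholder texture feature
    return color, texture

def distance(u, v):
    """Euclidean distance between two equal-length vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def identify(images, type_library):
    """S101-S105: return the vegetation type with the smallest type coefficient."""
    fused = stitch(images)                    # S102
    color, texture = extract_features(fused)  # S103
    coeffs = {                                # S104
        name: distance(color, a_i) + distance(texture, b_i)
        for name, (a_i, b_i) in type_library.items()
    }
    return min(coeffs, key=coeffs.get)        # S105
```

For example, with a library of two reference types, `identify` returns the type whose reference color and texture vectors lie closest to the fused image's features.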
Fig. 3 is a schematic diagram of a fused-image acquisition process. Referring to Fig. 3, step S102 specifically includes steps S201, S202 and S203. In step S201, image preprocessing is performed on each detection image. In step S202, image registration is performed on the preprocessed detection images to obtain a region image. Image registration is a key step in the stitching process; its accuracy determines the quality of the region image obtained by stitching. Image registration means aligning two or more images in spatial position, achieved by computing the best match between the images. In step S203, image fusion is performed on the region image to obtain the fused image. Image fusion eliminates visible stitching seams by fusing and interpolating pixels in the overlapping parts of the images, yielding a smooth, seamless image, i.e. the fused image.
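The pixel fusion and interpolation over an overlap can be illustrated with simple linear (feathering) blending. This is a generic sketch on 1-D grayscale rows under assumed names, not the patent's specific fusion method:

```python
def blend_overlap(left, right, overlap):
    """Linearly blend the `overlap` trailing pixels of `left` with the
    `overlap` leading pixels of `right` (both 1-D grayscale rows), so the
    seam fades smoothly from one image into the other."""
    assert overlap <= len(left) and overlap <= len(right)
    out = list(left[:-overlap])
    for k in range(overlap):
        alpha = (k + 1) / (overlap + 1)  # weight ramps from left to right
        out.append((1 - alpha) * left[len(left) - overlap + k] + alpha * right[k])
    out.extend(right[overlap:])
    return out
```

Blending two rows with constant values 10 and 20 over a 2-pixel overlap produces intermediate values across the seam instead of a hard step.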
Fig. 4 is a schematic diagram of an image preprocessing process. Referring to Fig. 4, step S201 specifically includes steps S301, S302 and S303. In step S301, image distortion correction is performed on each detection image. Because the image acquisition device used for drone aerial photography is an ordinary digital camera, optical distortion exists at the edges of its images. The distortion shifts the actual image-point positions, displaces the image-point coordinates and changes the ground positions of real objects, which ultimately affects matching accuracy and the generation of digital orthophoto products; the distortion must therefore be corrected before later image processing. Lens distortion correction of drone images, i.e. image distortion correction, mainly uses the self-calibration method in block aerial triangulation to estimate the possible systematic errors, including the actual measured focal length f of the camera, the principal-point offsets Δx and Δy, and the distortion parameters of the objective lens, where:
[Formula images PCTCN2021115177-appb-000003 to PCTCN2021115177-appb-000007 (not reproduced in the text)]
Δx and Δy are the distortion correction parameters of the digital camera;
(x, y) are the coordinates of the image point in the image-plane coordinate system;
(x_0, y_0) are the coordinates of the principal point;
(k_1, k_2) are the radial distortion coefficients;
(p_1, p_2) are the decentering (tangential) distortion coefficients;
a is the CCD non-square scale factor;
b is the non-orthogonality distortion coefficient.
In step S302, image smoothing is performed on the detection images. During formation, transmission, reception and processing, the detection images collected by the drone pick up more or less noise from external or internal interference, which degrades image quality; the purpose of image smoothing is to reduce this noise and improve image quality. In step S303, image enhancement is performed on the detection images after image smoothing. Its purpose is to improve the visual effect of the images so that the processed images better meet the needs of particular analyses than the originals. After enhancement, the drone's detection images are better suited to observation by the human eye and to computer analysis and processing, laying a good foundation for subsequent analysis; enhancement does not add information to the image data or change the basic characteristics of the images.
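A minimal example of the smoothing step: a 3x3 mean (box) filter, one of the simplest noise-reduction filters. This is a generic illustration of image smoothing, not a filter specified by the patent:

```python
def box_smooth(img):
    """3x3 mean filter on a 2-D grayscale image (list of rows);
    border pixels average only the neighbors that exist."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[ii][jj]
                    for ii in range(max(0, i - 1), min(h, i + 2))
                    for jj in range(max(0, j - 1), min(w, j + 2))]
            out[i][j] = sum(vals) / len(vals)
    return out
```

A uniform image passes through unchanged, while an isolated noise spike is spread out and strongly attenuated, which is exactly the behavior the smoothing step relies on.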
Fig. 5 is a schematic diagram of a feature extraction process. Referring to Fig. 5, step S103 further includes steps S401 and S402. In step S401, the fused image is converted from the RGB color space to the HSV color space. To extract usable color features from the fused image collected by the drone, the extraction must be carried out in a specific color space; common color spaces include the RGB color space and the HSV color space. Because the RGB color space represents the cognitive attributes of color unintuitively and is perceptually non-uniform, the present invention uses the HSV color space to characterize color features. The HSV color space is a color model oriented toward visual perception; the human eye's perception of color involves three elements: hue, saturation and brightness. Hue refers to the color of the light in a color image; saturation refers to the depth of the color in a color image; value refers to the brightness of the light that the human eye perceives from a color image, which is proportional to the reflectivity of the object. Since the fused image is initially represented in the RGB color model, its color values must be converted from the RGB representation to the HSV representation: a given set of (R, G, B) values in the RGB color space is converted to a set of (H, S, V) values by the following formulas:
[Formula images PCTCN2021115177-appb-000008 to PCTCN2021115177-appb-000011 (not reproduced in the text)]
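The conversion formulas themselves appear only as images in this text, but the standard RGB-to-HSV conversion they describe is implemented in Python's standard library, which can serve as a stand-in:

```python
import colorsys

def rgb_to_hsv_pixel(r, g, b):
    """Convert one 8-bit RGB pixel to (H in degrees, S and V in [0, 1])
    using the standard hexcone conversion from the stdlib."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s, v
```

For example, pure red maps to hue 0 with full saturation and value, and pure green maps to hue 120, matching the conventional HSV color wheel.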
In step S402, feature extraction is performed on the fused image in the HSV color space to obtain the fused color feature vector and the fused texture feature vector.
In step S104, the type coefficient is calculated as:
[Formula images PCTCN2021115177-appb-000012 and PCTCN2021115177-appb-000013 (not reproduced in the text)]
where:
d_ni is the type coefficient;
a_i is the type color feature vector;
b_i is the type texture feature vector;
c_n is the fused color feature vector;
t_n is the fused texture feature vector;
i denotes the vegetation type and is an integer.
The type color feature vector and the fused color feature vector are both 256-dimensional vectors; the type texture feature vector and the fused texture feature vector are both 3-dimensional vectors.
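The exact type-coefficient formulas exist only as images in this text. Given the symbol definitions (d_ni computed from a_i, b_i, c_n, t_n) and the rule that the minimum coefficient wins, one plausible reading is a distance between the fused feature vectors and each type's reference vectors; the sketch below implements that assumed reading, not necessarily the patent's exact formula:

```python
import math

def type_coefficient(a_i, b_i, c_n, t_n, w_color=1.0, w_texture=1.0):
    """Hypothetical type coefficient d_ni: a weighted sum of the Euclidean
    distances between the color vectors (a_i vs c_n, 256-d in the patent)
    and the texture vectors (b_i vs t_n, 3-d). Smaller means a closer match."""
    d_color = math.sqrt(sum((a - c) ** 2 for a, c in zip(a_i, c_n)))
    d_texture = math.sqrt(sum((b - t) ** 2 for b, t in zip(b_i, t_n)))
    return w_color * d_color + w_texture * d_texture

def best_type(type_library, c_n, t_n):
    """Return the vegetation type i with the smallest d_ni (step S105)."""
    return min(type_library,
               key=lambda i: type_coefficient(*type_library[i], c_n, t_n))
```

Short vectors stand in for the 256-dimensional and 3-dimensional ones here; identical vectors yield a coefficient of zero, and `best_type` picks the type whose references are closest to the fused features.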
According to one aspect of the present invention, another method for identifying urban vegetation types is provided.
Fig. 6 shows another method for identifying urban vegetation types according to the present invention. The method shown in Fig. 6 includes: step S501, above a detection area, collecting a plurality of detection images of the detection area by a drone, wherein each detection image corresponds to a part of the detection area; step S502, the drone performing image stitching on the plurality of detection images to obtain a fused image of the detection area; step S503, the drone performing feature extraction on the fused image to obtain a fused color feature vector and a fused texture feature vector of the fused image; step S504, the drone obtaining a plurality of type coefficients based on the fused color feature vector, the fused texture feature vector, the plurality of type color feature vectors corresponding to the fused color feature vector, and the plurality of type texture feature vectors corresponding to the fused texture feature vector; step S505, the drone outputting the vegetation type corresponding to the minimum value among the plurality of type coefficients.
According to one aspect of the present invention, a system for identifying urban vegetation types is provided.
Fig. 7 is a structural block diagram of a system for identifying urban vegetation types. The system 200 shown in Fig. 7 includes:
a collection module 201, which collects a plurality of detection images of a detection area, wherein each detection image corresponds to a part of the detection area;
a fusion module 202, which performs image stitching on the plurality of detection images to obtain a fused image of the detection area;
an extraction module 203, which performs feature extraction on the fused image to obtain a fused color feature vector and a fused texture feature vector of the fused image;
a coefficient acquisition module 204, which obtains a plurality of type coefficients based on the fused color feature vector, the fused texture feature vector, the plurality of type color feature vectors corresponding to the fused color feature vector, and the plurality of type texture feature vectors corresponding to the fused texture feature vector;
an output module 205, which outputs the vegetation type corresponding to the minimum value among the plurality of type coefficients.
According to one aspect of the present invention, a device for identifying urban vegetation types is provided, comprising: a processor; and a memory storing executable instructions for the processor; wherein, when the executable instructions are executed, the processor performs the steps of the method for identifying urban vegetation types.
Fig. 8 is a schematic diagram of the device for identifying urban vegetation types of the present invention. The electronic device 600 according to this embodiment of the present invention is described below with reference to Fig. 8. The electronic device 600 shown in Fig. 8 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present invention.
As shown in Fig. 8, the electronic device 600 takes the form of a general-purpose computing device. The components of the electronic device 600 may include, but are not limited to: at least one processing unit 610, at least one storage unit 620, a bus 630 connecting the different platform components (including the storage unit 620 and the processing unit 610), a display unit 640, and so on.
The storage unit stores program code that can be executed by the processing unit 610, so that the processing unit 610 performs the steps described above in this specification; for example, the processing unit 610 may perform the steps shown in Fig. 2.
The storage unit 620 may include readable media in the form of volatile storage units, such as a random access memory (RAM) 6201 and/or a cache 6202, and may further include a read-only memory (ROM) 6203.
The storage unit 620 may also include a program/utility 6204 having a set (at least one) of program modules 6205, including but not limited to: an operating system, one or more application programs, other program modules and program data; each of these examples, or some combination of them, may include an implementation of a network environment.
The bus 630 may represent one or more of several types of bus structures, including a storage-unit bus or storage-unit controller, a peripheral bus, a graphics acceleration port, a processing unit, or a local bus using any of a variety of bus structures.
The electronic device 600 may also communicate with one or more external devices 700 (e.g. keyboards, pointing devices, Bluetooth devices, etc.), with one or more devices that enable a user to interact with the electronic device 600, and/or with any device (e.g. router, modem, etc.) that enables the electronic device 600 to communicate with one or more other computing devices. Such communication may take place through an input/output (I/O) interface 650. The electronic device 600 may also communicate with one or more networks (e.g. a local area network (LAN), a wide area network (WAN) and/or a public network such as the Internet) through a network adapter 660, which may communicate with the other modules of the electronic device 600 through the bus 630. It should be understood that, although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 600, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage platforms, and so on.
According to one aspect of the present invention, a computer-readable storage medium is provided for storing a program which, when executed, implements the steps of the above method.
Fig. 9 is a schematic structural diagram of the computer-readable storage medium of the present invention. Referring to Fig. 9, a program product 800 for implementing the above method according to an embodiment of the present invention is described; it may take the form of a portable compact disc read-only memory (CD-ROM) containing program code, and may be run on a terminal device such as a personal computer. However, the program product of the present invention is not limited thereto; in this document, a readable storage medium may be any tangible medium that contains or stores a program that can be used by, or in conjunction with, an instruction execution system, apparatus or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
A readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A readable signal medium may also be any readable medium other than a readable storage medium that can send, propagate or transmit a program for use by, or in conjunction with, an instruction execution system, apparatus or device. Program code contained on a readable medium may be transmitted using any suitable medium, including but not limited to wireless, wired, optical cable, RF, etc., or any suitable combination of the above.
Program code for carrying out the operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server. Where a remote computing device is involved, it may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
In summary, the method, system, device and medium for identifying urban vegetation types of the present invention use a drone above a detection area to collect multiple detection images of the area, stitch the multiple detection images into one complete fused image, extract color features and texture features from the fused image, and then compute multiple type coefficients from those features; the minimum of the type coefficients corresponds to the vegetation type. This method can quickly obtain the types of vegetation in the detection area at low cost.
The above is a further detailed description of the present invention in combination with specific preferred embodiments, and the specific implementation of the present invention shall not be considered limited to these descriptions. For those of ordinary skill in the art to which the present invention belongs, several simple deductions or substitutions may be made without departing from the concept of the present invention, all of which shall be regarded as falling within the protection scope of the present invention.

Claims (10)

  1. A method for identifying urban vegetation types, comprising:
    collecting a plurality of detection images of a detection area, wherein each detection image corresponds to a local part of the detection area;
    stitching the plurality of detection images to obtain a fused image of the detection area;
    performing feature extraction on the fused image to obtain a fused color feature vector and a fused texture feature vector of the fused image;
    obtaining a plurality of type coefficients based on the fused color feature vector, the fused texture feature vector, a plurality of type color feature vectors corresponding to the fused color feature vector, and a plurality of type texture feature vectors corresponding to the fused texture feature vector;
    outputting the vegetation type corresponding to the minimum value among the plurality of type coefficients.
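Purely as an illustration, and not as part of the claimed subject matter, the five steps of claim 1 can be sketched as follows. The helper names, the toy averaging "stitch", and the concrete feature definitions are all assumptions made for the sketch; the claims do not fix them:

```python
import numpy as np

def stitch(images):
    # Toy stand-in for image stitching: average already-aligned patches.
    return np.mean(images, axis=0)

def extract_features(img):
    # Hypothetical features: a normalized 256-bin intensity histogram as the
    # "color" vector and (mean, std, range) as a 3-dimensional "texture" vector.
    color = np.histogram(img, bins=256, range=(0, 256))[0].astype(float)
    color /= max(color.sum(), 1.0)
    texture = np.array([img.mean(), img.std(), img.max() - img.min()])
    return color, texture

def identify(images, type_colors, type_textures, labels):
    fused = stitch(images)
    c_n, t_n = extract_features(fused)
    # One type coefficient per candidate vegetation type; the minimum wins.
    coeffs = [np.linalg.norm(c_n - a_i) + np.linalg.norm(t_n - b_i)
              for a_i, b_i in zip(type_colors, type_textures)]
    return labels[int(np.argmin(coeffs))]
```

The design mirrors the claim structure: acquisition is assumed done, stitching produces one fused image, features are extracted once from it, and classification reduces to an arg-min over per-type coefficients.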
  2. The method for identifying urban vegetation types according to claim 1, wherein stitching the plurality of detection images to obtain a fused image of the detection area comprises:
    performing image preprocessing on each detection image;
    performing image registration on the preprocessed detection images to obtain a region image;
    performing image fusion on the region image to obtain the fused image.
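A minimal sketch of the registration-then-fusion idea of claim 2, assuming each image's placement offset is already known from registration (in practice offsets would come from feature matching between overlapping images). The `mosaic` helper and the averaging blend are illustrative choices, not the claimed implementation:

```python
import numpy as np

def mosaic(patches, offsets, canvas_shape):
    """Place each registered patch at its (row, col) offset on a common canvas;
    average the pixel values wherever patches overlap."""
    acc = np.zeros(canvas_shape, dtype=float)
    cnt = np.zeros(canvas_shape, dtype=float)
    for patch, (y, x) in zip(patches, offsets):
        h, w = patch.shape
        acc[y:y + h, x:x + w] += patch
        cnt[y:y + h, x:x + w] += 1.0
    cnt[cnt == 0] = 1.0  # avoid division by zero outside the covered area
    return acc / cnt
```

Averaging in the overlap region is the simplest fusion rule; weighted (feathered) blending is a common refinement.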
  3. The method for identifying urban vegetation types according to claim 2, wherein performing image preprocessing on each detection image comprises:
    performing image distortion correction on each detection image;
    performing image smoothing on the detection image;
    performing image enhancement on the smoothed detection image.
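For illustration only, the smoothing and enhancement steps of claim 3 could be realized as a box filter and histogram equalization. Distortion correction is omitted from the sketch because it requires the camera's intrinsic parameters, which the claims do not specify; the concrete filter choices below are likewise assumptions:

```python
import numpy as np

def smooth(img, k=3):
    """Box-filter smoothing (a simple stand-in for e.g. Gaussian smoothing)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def equalize(img):
    """Histogram equalization as a simple contrast-enhancement step."""
    img = img.astype(np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size
    return (cdf[img] * 255).astype(np.uint8)
```

Smoothing suppresses sensor noise before enhancement; equalization then spreads the intensity distribution so that vegetation texture is easier to extract.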
  4. The method for identifying urban vegetation types according to claim 1, wherein the type coefficient is calculated as:
    Figure PCTCN2021115177-appb-100001
    Figure PCTCN2021115177-appb-100002
    wherein:
    d_ni is the type coefficient;
    a_i is the type color feature vector;
    b_i is the type texture feature vector;
    c_n is the fused color feature vector;
    t_n is the fused texture feature vector; and
    i denotes the vegetation type, i being an integer.
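The two formulas of claim 4 appear only as image placeholders in this extraction, so their exact form is not recoverable here. As a hypothetical reading only, a type coefficient d_ni that grows with the distance between the fused feature vectors (c_n, t_n) and the reference vectors (a_i, b_i) of vegetation type i, with the minimum selected as in claim 1, might look like:

```python
import numpy as np

def type_coefficient(c_n, t_n, a_i, b_i):
    # Assumed reading (not the patent's actual formula): d_ni measures how far
    # the fused features (c_n, t_n) lie from the reference features (a_i, b_i).
    return np.linalg.norm(c_n - a_i) + np.linalg.norm(t_n - b_i)

def classify(c_n, t_n, type_features):
    # type_features maps a type id i to its reference pair (a_i, b_i);
    # the type with the minimum coefficient is output.
    coeffs = {i: type_coefficient(c_n, t_n, a_i, b_i)
              for i, (a_i, b_i) in type_features.items()}
    return min(coeffs, key=coeffs.get)
```

Whatever the exact formula, the decision rule of claim 1 is a nearest-prototype rule: the smaller d_ni, the better type i explains the fused image's features.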
  5. The method for identifying urban vegetation types according to claim 1, wherein:
    the type color feature vectors and the fused color feature vector are each 256-dimensional vectors; and
    the type texture feature vectors and the fused texture feature vector are each 3-dimensional vectors.
  6. The method for identifying urban vegetation types according to claim 1, wherein performing feature extraction on the fused image to obtain the fused color feature vector and the fused texture feature vector further comprises:
    converting the fused image from the RGB color space to the HSV color space;
    performing feature extraction on the fused image in the HSV color space to obtain the fused color feature vector and the fused texture feature vector.
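For illustration, the HSV conversion and feature extraction of claims 5 and 6 might be sketched as below. The choice of a 256-bin hue histogram for the 256-dimensional color vector, and of (mean, standard deviation, mean vertical gradient) of the V channel for the 3-dimensional texture vector, are assumptions, since the claims do not fix these definitions:

```python
import colorsys
import numpy as np

def rgb_to_hsv(img):
    # Per-pixel RGB -> HSV for a float image with channels in [0, 1].
    flat = img.reshape(-1, 3)
    hsv = np.array([colorsys.rgb_to_hsv(r, g, b) for r, g, b in flat])
    return hsv.reshape(img.shape)

def hsv_features(hsv):
    # Assumption: the 256-dimensional color vector is a normalized
    # 256-bin histogram of the hue channel.
    color = np.histogram(hsv[..., 0], bins=256, range=(0.0, 1.0))[0].astype(float)
    color /= max(color.sum(), 1.0)
    # Assumption: the 3-dimensional texture vector is (mean, std,
    # mean vertical gradient) of the V (value) channel.
    v = hsv[..., 2]
    grad = float(np.abs(np.diff(v, axis=0)).mean()) if v.shape[0] > 1 else 0.0
    texture = np.array([v.mean(), v.std(), grad])
    return color, texture
```

HSV is a natural choice here because hue separates vegetation color from illumination, which mostly affects the V channel.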
  7. A method for identifying urban vegetation types, comprising:
    collecting, by a drone above a detection area, a plurality of detection images of the detection area, wherein each detection image corresponds to a local part of the detection area;
    stitching, by the drone, the plurality of detection images to obtain a fused image of the detection area;
    performing, by the drone, feature extraction on the fused image to obtain a fused color feature vector and a fused texture feature vector of the fused image;
    obtaining, by the drone, a plurality of type coefficients based on the fused color feature vector, the fused texture feature vector, a plurality of type color feature vectors corresponding to the fused color feature vector, and a plurality of type texture feature vectors corresponding to the fused texture feature vector;
    outputting, by the drone, the vegetation type corresponding to the minimum value among the plurality of type coefficients.
  8. A system for identifying urban vegetation types, comprising:
    an acquisition module, which collects a plurality of detection images of a detection area, wherein each detection image corresponds to a local part of the detection area;
    a fusion module, which stitches the plurality of detection images to obtain a fused image of the detection area;
    an extraction module, which performs feature extraction on the fused image to obtain a fused color feature vector and a fused texture feature vector of the fused image;
    a coefficient acquisition module, which obtains a plurality of type coefficients based on the fused color feature vector, the fused texture feature vector, a plurality of type color feature vectors corresponding to the fused color feature vector, and a plurality of type texture feature vectors corresponding to the fused texture feature vector;
    an output module, which outputs the vegetation type corresponding to the minimum value among the plurality of type coefficients.
  9. A device for identifying urban vegetation types, comprising:
    a processor; and
    a memory storing executable instructions of the processor;
    wherein the processor is configured to perform, by executing the executable instructions, the steps of the method for identifying urban vegetation types according to any one of claims 1-8.
  10. A computer-readable storage medium storing a program, wherein, when the program is executed, the steps of the method for identifying urban vegetation types according to any one of claims 1-8 are implemented.
PCT/CN2021/115177 2020-11-09 2021-08-28 Urban vegetation type identification method and system, and device and medium WO2022095570A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202022137350 2020-11-09
CN202022137350.1 2020-11-09

Publications (1)

Publication Number Publication Date
WO2022095570A1 true WO2022095570A1 (en) 2022-05-12

Family

ID=81457493

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/115177 WO2022095570A1 (en) 2020-11-09 2021-08-28 Urban vegetation type identification method and system, and device and medium

Country Status (1)

Country Link
WO (1) WO2022095570A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103514456A (en) * 2013-06-30 2014-01-15 安科智慧城市技术(中国)有限公司 Image classification method and device based on compressed sensing multi-core learning
US20150011194A1 (en) * 2009-08-17 2015-01-08 Digimarc Corporation Methods and systems for image or audio recognition processing
CN109697475A (en) * 2019-01-17 2019-04-30 中国地质大学(北京) A kind of muskeg information analysis method, remote sensing monitoring component and monitoring method
CN112329649A (en) * 2020-11-09 2021-02-05 上海圣之尧智能科技有限公司 Urban vegetation type identification method, system, equipment and medium



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21888269

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23.10.2023)