WO2021036267A1 - Image detection method and related device - Google Patents

Image detection method and related device

Info

Publication number
WO2021036267A1
Authority
WO
WIPO (PCT)
Prior art keywords
area
image
target
detection
hyperspectral
Prior art date
Application number
PCT/CN2020/083507
Other languages
English (en)
French (fr)
Inventor
张钧萍
汪鹏程
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2021036267A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/14 Image acquisition
    • G06V30/148 Segmentation of character regions
    • G06V30/153 Segmentation of character regions using recognition of characters or words
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/194 Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/625 License plates

Definitions

  • This application relates to the field of image processing technology, and in particular to an image detection method and related equipment.
  • A vehicle license plate is the legal credential that permits a car to be driven on public roads, and it is the basis on which road traffic management departments, public security administration departments and the general public supervise driving behaviour and identify, remember and search for vehicles.
  • In conventional images, such fake license plates show few detectable abnormalities and can deceive electronic traffic monitoring; some fake plates are even difficult for the human eye to distinguish, allowing offenders to escape legal sanctions.
  • Recognizing whether a license plate is genuine or fake first requires the system to obtain an image of the license plate and its location information.
  • The traditional scheme uses an ordinary RGB camera to capture visible-light images containing the vehicle and its license plate, and then uses image processing or machine learning methods to locate the license plate.
  • The license plate detection algorithm then extracts the shape characteristics of the license plate and the movement characteristics of the vehicle to determine whether the current plate is consistent with the characteristics of a real license plate, thereby identifying fake plates.
  • the embodiments of the present application provide an image detection method and related equipment to effectively distinguish between real license plates and forged license plates in license plate images.
  • In a first aspect, an embodiment of the present application provides an image detection method, including:
  • acquiring a target image, where the target image is an image captured with supplementary light of a specific waveband and includes a detection area, and where the difference in spectral reflectance between a first material and a second material within the specific waveband is greater than a first threshold; detecting, according to target pixels in the detection area, whether a target area exists in the target image, where the target pixels include pixels whose gray value is not within a first preset range, or pixels whose gray value is within a second preset range, the first preset range including the gray value range of the first material and the second preset range including the gray value range of the second material; generating a first detection result when the target area is detected; and generating a second detection result when no target area is detected.
  • In this embodiment of the present application, a target image is first acquired. The target image is an image captured with supplementary light of a specific waveband; because the difference in spectral reflectance between the first material and the second material within that waveband is greater than the first threshold, pixels of different materials in the target image also have different gray values.
  • The first material may be a real license plate material, and the second material may be a non-real license plate material such as metal, plastic or tape. According to the target pixels in the detection area of the target image, it can be detected whether a target area exists in the detection area, that is, whether the license plate image to be detected contains an area whose gray values fall outside the gray value range of the real license plate material, or inside the gray value range of the non-real license plate material. If so, the first detection result is generated; if not, the second detection result is generated. Displaying the difference between the first material and the second material in the target image in this way effectively distinguishes real license plates from forged ones.
  • In a possible implementation, before acquiring the target image, the method further includes: collecting hyperspectral data of the first material and hyperspectral data of the second material with a hyperspectral camera, and determining the range of the specific waveband according to the hyperspectral data of the first material and the hyperspectral data of the second material.
  • Collecting hyperspectral data of the first and second materials with a hyperspectral camera to find the wavelength range with the largest difference in spectral reflectance allows the specific waveband to be obtained accurately, so that real license plate materials and forged license plate materials in the target image can be distinguished effectively in the subsequent detection process.
  • the range of the specific wavelength band includes 550 nm to 700 nm.
  • The waveband in this range is the waveband where the spectral reflectance difference between real license plate material and fake license plate material is the largest. Using this waveband for supplementary-light shooting can effectively distinguish the real license plate from the fake license plate in the target image.
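  • As an illustration only (not part of the patent text), the following Python sketch shows one way such a waveband could be selected from measured reflectance curves; the wavelength grid, the placeholder curves and the threshold value are all assumptions.

```python
import numpy as np

# Hypothetical reflectance curves of the two materials, sampled on a common wavelength grid (nm).
wavelengths = np.arange(400, 1001, 10)                  # 400-1000 nm in 10 nm steps
real_plate = np.linspace(0.2, 0.8, wavelengths.size)    # placeholder for measured real-plate reflectance
camouflage = np.linspace(0.5, 0.4, wavelengths.size)    # placeholder for measured camouflage reflectance

first_threshold = 0.2                                   # assumed minimum reflectance difference

# Keep the wavelengths where the reflectance difference exceeds the threshold.
difference = np.abs(real_plate - camouflage)
candidates = wavelengths[difference > first_threshold]

if candidates.size:
    print(f"candidate fill-light waveband: {candidates.min()}-{candidates.max()} nm")
else:
    print("no waveband exceeds the threshold; re-measure or lower the threshold")
```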
  • In a possible implementation, detecting whether a target area exists in the detection area includes: extracting the detection area from the target image; and detecting that a target area exists in the detection area if the area covered by the target pixels of the detection area is greater than the second threshold, or if the overlap between the area covered by the target pixels and the character area in the detection area is greater than the third threshold.
  • The embodiment of this application determines whether a target area exists in the detection area by determining the target pixels in the detection area. The judgment may be based on the area covered by the target pixels, or on the overlap between the area covered by the target pixels and the character area in the detection area. This approach distinguishes real license plates from fake ones conveniently and at low cost.
  • In a possible implementation, the target image includes a vehicle image, the detection area includes the license plate area in the vehicle image or part of the license plate image, the first material is a real license plate material, and the second material is a non-real license plate material. The image detection method may specifically be a method for detecting the authenticity of a license plate in a vehicle image.
  • The first detection result describes that the license plate area is a non-real license plate; the second detection result describes that the license plate area is a real license plate. This approach provides a convenient and low-cost way to distinguish real license plates from non-real ones.
  • In a second aspect, an embodiment of the present application provides an image detection method, including: acquiring hyperspectral feature information of an image of a first material and hyperspectral feature information of a target image, the hyperspectral feature information describing the spectral information of an image, and the target image including a detection area; using the hyperspectral feature information of the image of the first material to detect whether a target area exists in the detection area, the target area including an area that does not match the hyperspectral feature information of the first material; generating a first detection result when the target area is detected; and generating a second detection result when no target area is detected.
  • The embodiment of the application first obtains the hyperspectral feature information of the image of the first material and of the target image. The first material may be a real license plate material. The acquired hyperspectral feature information of the image of the real license plate material is then used to check whether a target area exists in the detection area, that is, whether the license plate image to be detected contains an area that does not match the hyperspectral feature information of the real license plate material. If so, the first detection result is generated; if not, the second detection result is generated.
  • In this way, the hyperspectral data of the real license plate is used as a reference against which the license plate to be detected is compared, avoiding the drawback of traditional image detection that can rely only on appearance features, and exploiting the fact that different substances have different spectra in a hyperspectral image, thereby effectively distinguishing real license plates from fake ones.
  • In a possible implementation, before detecting whether a target area exists in the detection area using the hyperspectral feature information of the image of the first material, the method further includes: acquiring hyperspectral feature information of an image of a second material. Using the hyperspectral feature information of the image of the first material to detect whether a target area exists in the detection area then includes: using the hyperspectral feature information of the images of the first material and the second material to detect whether a target area exists in the detection area.
  • That is, the hyperspectral feature information of images of both materials can be obtained, where the first material may be a real license plate material and the second material may be a non-real license plate material.
  • In a possible implementation, acquiring the hyperspectral feature information of the image of the first material includes: according to the hyperspectral data of the image of the first material, selecting the spectral mean values of target pixels of the image of the first material to form a first spectral matrix, the hyperspectral feature information of the image of the first material including the first spectral matrix. Using the hyperspectral feature information of the image of the first material to detect whether a target area exists in the detection area includes: orthogonally projecting the hyperspectral data of the detection area onto the first spectral matrix to obtain an abundance-estimation detection result, which describes, for each pixel in the detection area, the probability that the pixel is the first material; and detecting that a target area exists in the detection area if the area covered by the target pixels is greater than the second threshold, or if the overlap between the area covered by the target pixels and the character area in the detection area is greater than the third threshold, where the target pixels include the pixels whose probability in the abundance-estimation detection result meets the preset condition.
  • In this implementation, the hyperspectral data of the image of the first material is obtained first and used to form the spectral matrix of the first material; orthogonal projection onto the first spectral matrix then gives, for each pixel in the detection area, the probability that the pixel is the first material, so that pixels that are not the first material can be identified. Whether a target area exists is then determined from the size of the area covered by those pixels, or from its overlap with the character area in the detection area.
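  • A minimal numerical sketch of this idea (not from the patent; the shapes, placeholder data and the 0.9 cut-off are illustrative): the hyperspectral data of the detection area is projected onto the first spectral matrix by least squares, and pixels poorly explained by the real-plate spectra are flagged as target pixels.

```python
import numpy as np

def real_plate_probability(pixels, M):
    """pixels: (n_pixels, n_bands) spectra of the detection area.
    M: (n_bands, k) first spectral matrix, one column per real-plate endmember.
    Returns a crude per-pixel 'probability' of being real-plate material,
    based on how well the least-squares projection onto M reconstructs the pixel."""
    abundances, *_ = np.linalg.lstsq(M, pixels.T, rcond=None)   # project onto the span of M
    reconstruction = (M @ abundances).T
    residual = np.linalg.norm(pixels - reconstruction, axis=1)
    return 1.0 - residual / (np.linalg.norm(pixels, axis=1) + 1e-12)

# Placeholder data: 300 bands, 2 real-plate endmembers, 1000 plate pixels.
rng = np.random.default_rng(0)
M = rng.random((300, 2))
plate_pixels = rng.random((1000, 300))

p1 = real_plate_probability(plate_pixels, M)
target_pixels = p1 <= 0.9          # pixels that fail the preset condition (illustrative)
```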
  • In a possible implementation, acquiring the hyperspectral feature information of the image of the first material includes: according to the hyperspectral data of the image of the first material, selecting the spectral mean values of target pixels in the image of the first material to form a first spectral matrix, the hyperspectral feature information of the image of the first material including the first spectral matrix; and acquiring the hyperspectral feature information of the image of the second material includes: according to the hyperspectral data of the image of the second material, selecting the spectral mean values of target pixels in the image of the second material to form a second spectral matrix, the hyperspectral feature information of the image of the second material including the second spectral matrix.
  • Using the hyperspectral feature information of the images of the first material and the second material to detect whether a target area exists in the detection area includes: combining the first spectral matrix and the second spectral matrix into an endmember matrix; and orthogonally projecting the hyperspectral data of the detection area onto the endmember matrix to obtain an abundance-estimation detection result.
  • In this implementation, the hyperspectral data of the images of the first and second materials is acquired first and used to form the spectral matrix of each material; orthogonal projection onto the endmember matrix composed of the first and second spectral matrices then gives, for each pixel in the detection area, the probability that the pixel is the first material or the second material, so that pixels that are not the first material, or that are the second material, can be identified. Whether a target area exists is then determined from the size of the area covered by those pixels, or from its overlap with the character area in the detection area. This effectively distinguishes the first material from the second material in the detection area and detects whether the target area exists.
  • In a third aspect, an embodiment of the present application provides an image detection device, including:
  • a first acquisition unit, configured to acquire a target image, the target image being an image captured with supplementary light of a specific waveband and including a detection area, where the difference in spectral reflectance between the first material and the second material within the specific waveband is greater than a first threshold;
  • a first detection unit, configured to detect, according to target pixels in the detection area, whether a target area exists in the detection area, where the target pixels include pixels whose gray value is not within a first preset range, or pixels whose gray value is within a second preset range, the first preset range including the gray value range of the first material and the second preset range including the gray value range of the second material; and
  • a first generating unit, configured to generate a first detection result when the target area is detected, and generate a second detection result when no target area is detected.
  • In a possible implementation, the device further includes: an acquisition unit, configured to acquire hyperspectral data of the first material and hyperspectral data of the second material through a hyperspectral camera before the target image is acquired; and a determining unit, configured to determine the range of the specific waveband according to the hyperspectral data of the first material and the hyperspectral data of the second material.
  • the range of the specific wavelength band includes 550 nm to 700 nm.
  • In a possible implementation, the first detection unit specifically includes an extraction unit configured to extract the detection area from the target image; and the first detection unit is further configured to detect that a target area exists in the detection area if the area covered by the target pixels of the detection area is greater than the second threshold, or if the overlap between the area covered by the target pixels and the character area in the detection area is greater than the third threshold.
  • In a fourth aspect, an embodiment of the present application provides an image detection device, including:
  • a second acquiring unit, configured to acquire hyperspectral feature information of an image of a first material and hyperspectral feature information of a target image, the hyperspectral feature information describing the spectral information of an image, and the target image including a detection area;
  • a second detection unit, configured to use the hyperspectral feature information of the image of the first material to detect whether a target area exists in the detection area, the target area including an area that does not match the hyperspectral feature information of the first material; and
  • a second generating unit, configured to generate a first detection result when the target area is detected, and generate a second detection result when no target area is detected.
  • In a possible implementation, the second acquiring unit is further configured to acquire hyperspectral feature information of an image of a second material before the hyperspectral feature information of the image of the first material is used to detect whether a target area exists in the detection area; and the second detection unit is further configured to use the hyperspectral feature information of the images of the first material and the second material to detect whether a target area exists in the detection area.
  • In a possible implementation, the device further includes a composition unit, configured to select, according to the hyperspectral data of the image of the first material, the spectral mean values of target pixels of the image of the first material to form a first spectral matrix, the hyperspectral feature information of the image of the first material including the first spectral matrix. The second detection unit includes an orthogonal projection unit, configured to orthogonally project the hyperspectral data of the detection area onto the first spectral matrix to obtain an abundance-estimation detection result, which describes the probability that each pixel in the detection area is the first material. The second detection unit is further configured to detect that a target area exists in the detection area if the area covered by the target pixels is greater than the second threshold, or if the overlap between the area covered by the target pixels and the character area in the detection area is greater than the third threshold, where the target pixels include the pixels whose probability in the abundance-estimation detection result meets the preset condition.
  • In a possible implementation, the composition unit is further configured to select, according to the hyperspectral data of the image of the first material, the spectral mean values of target pixels in the image of the first material to form a first spectral matrix, the hyperspectral feature information of the image of the first material including the first spectral matrix; and the composition unit is further configured to select, according to the hyperspectral data of the image of the second material, the spectral mean values of target pixels in the image of the second material to form a second spectral matrix, the hyperspectral feature information of the image of the second material including the second spectral matrix.
  • The second detection unit includes: the composition unit, further configured to combine the first spectral matrix and the second spectral matrix into an endmember matrix; and the orthogonal projection unit, further configured to orthogonally project the hyperspectral data of the detection area onto the endmember matrix to obtain an abundance-estimation detection result, which describes the probability that each pixel in the detection area is the first material and/or the second material.
  • The second detection unit is further configured to detect that a target area exists in the detection area if the area covered by the target pixels is greater than the second threshold, or if the overlap between the area covered by the target pixels and the character area in the detection area is greater than the third threshold, where the target pixels include the pixels whose probability in the abundance-estimation detection result meets the preset condition.
  • an embodiment of the present application provides a terminal device, which includes a processor, and the processor is configured to support the terminal device to implement corresponding functions in the image detection method provided in the first aspect.
  • the terminal device may further include a memory, which is used for coupling with the processor, and stores the necessary program instructions and data of the terminal device.
  • The terminal device may also include a communication interface for the terminal device to communicate with other devices or a communication network.
  • An embodiment of the present application further provides a camera, which includes a fill light and a camera module, where the fill light is configured to generate compensation light of a specific waveband, the difference in spectral reflectance between the first material and the second material within the specific waveband being greater than a first threshold, and the camera module is configured to capture a target image based on the specific waveband.
  • the camera may also include a memory, which is used for coupling with the processor, and stores the necessary program instructions and data of the camera.
  • the camera may also include a communication interface for communicating with other devices or a communication network.
  • An embodiment of the present application further provides a computer-readable storage medium for storing computer software instructions used by the image detection device provided in the third or fourth aspect, which includes a program designed to execute the above aspects.
  • The embodiments of the present application further provide a computer program, the computer program including instructions that, when executed by a computer, enable the computer to perform the processes performed by the image detection device in the third or fourth aspect.
  • The present application further provides a chip system, the chip system including a processor configured to support an electronic device in implementing the functions involved in the first or second aspect, for example, generating or processing the information involved in the above image detection method.
  • the chip system further includes a memory, and the memory is used to store program instructions and data necessary for the data sending device.
  • the chip system can be composed of chips, or include chips and other discrete devices.
  • FIG. 1 is a schematic diagram of the system architecture of an image detection method provided by an embodiment of the present application
  • FIG. 2 is a schematic flowchart of an image detection method provided by an embodiment of the present application.
  • Fig. 3 is a schematic diagram of an application scenario for acquiring a target image provided by an embodiment of the present application
  • FIG. 4 is a schematic diagram of the spectral difference of different materials provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of an application scenario of a target image detection result provided by an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of another image detection method provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of an image detection device provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of another image detection device provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of the hardware structure of a camera provided by an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of a communication chip provided by an embodiment of the application.
  • The term "component" used in this specification is used to denote a computer-related entity: hardware, firmware, a combination of hardware and software, software, or software in execution.
  • The component may be, but is not limited to, a process running on a processor, a processor, an object, an executable file, an execution thread, a program, and/or a computer.
  • Both an application running on a computing device and the computing device itself can be components.
  • One or more components may reside in processes and/or threads of execution, and components may be located on one computer and/or distributed among two or more computers.
  • these components can be executed from various computer readable media having various data structures stored thereon.
  • The components may communicate through local and/or remote processes, for example based on a signal having one or more data packets (e.g. data from one component interacting with another component in a local system or a distributed system, and/or interacting with other systems across a network such as the Internet by means of a signal).
  • Hyperspectral image: a spectral image whose spectral resolution is on the order of 10⁻²λ is called a hyperspectral image.
  • A hyperspectral image is finely divided in the spectral dimension: not merely the traditional distinction between black and white or between red, green and blue, but N channels in the spectral dimension; for example, the range 400 nm-1000 nm can be divided into 300 channels. What a hyperspectral device acquires is therefore a data cube, which contains image information and is also expanded in the spectral dimension, so that not only the spectral data of every point in the image can be obtained, but also the image information at any spectral band.
  • In a hyperspectral image, the image information of the collected sample is combined with its spectral information. The image information can reflect external quality characteristics of the sample such as size, shape and defects; since different components have different spectral absorption, the image will clearly reflect a given defect at a certain wavelength, while the spectral information can fully reflect differences in the internal physical structure and chemical composition of the sample.
  • RGB color mode is an industry color standard. A wide variety of colors is obtained by varying the three color channels of red (R), green (G) and blue (B) and superimposing them on one another; RGB represents the colors of the red, green and blue channels. This standard covers almost all colors that human vision can perceive and is currently one of the most widely used color systems.
  • HSV (Hue, Saturation, Value)
  • HSV is a color space created based on the intuitive characteristics of colors, also known as Hexcone Model.
  • the color parameters in this model are: hue (H), saturation (S), and brightness (V).
  • Hue H is measured as an angle with a value range of 0°-360°, counted counterclockwise starting from red: red is 0°, green is 120°, and blue is 240°. Their complementary colors are: yellow at 60°, cyan at 180°, and magenta at 300°.
  • Saturation S indicates how close a color is to the pure spectral color. A color can be seen as the result of mixing a certain spectral color with white: the greater the proportion of the spectral color, the closer the color is to the spectral color and the higher its saturation. A highly saturated color is deep and vivid; when the white-light component of the spectral color is 0, the saturation is highest. The value range is 0% to 100%.
  • Brightness V indicates how bright the color is. For a light-source color, the brightness value is related to the luminance of the illuminant; for an object color, the value is related to the transmittance or reflectance of the object. The value usually ranges from 0% (black) to 100% (white).
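  • A quick check of this convention (illustrative only, not part of the patent), using Python's standard colorsys module, which reports hue on a 0-1 scale rather than in degrees:

```python
import colorsys

# Pure red, green and blue should map to hue 0, 120 and 240 degrees with full saturation and value.
for name, (r, g, b) in {"red": (1.0, 0.0, 0.0), "green": (0.0, 1.0, 0.0), "blue": (0.0, 0.0, 1.0)}.items():
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    print(f"{name}: H={h * 360:.0f} deg, S={s:.0%}, V={v:.0%}")
```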
  • Endmember: an endmember is equivalent to a sub-pixel within a pixel that contains the spectral information of only one type of ground feature; endmembers can be extracted according to the resolution of multispectral or hyperspectral data. Suppose there are two pixels: one pixel contains three types of ground features, A, B and C, and is called a mixed pixel; the other contains only a single feature and is called a pure pixel, which can be used as an endmember.
  • An endmember contains only one type of ground-feature information. In general, pixels are mixed pixels containing several types of features. When decomposing a mixed pixel, one can analyse how many kinds of endmembers the pixel contains and quantitatively describe the area percentage of each kind of endmember within the pixel, that is, the abundance of the endmembers.
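  • A toy numerical illustration of these two terms (not part of the patent): a mixed pixel is modelled as an abundance-weighted sum of endmember spectra, and the abundances can be recovered by least squares.

```python
import numpy as np

# Two hypothetical endmember spectra sampled at four bands.
feature_a = np.array([0.10, 0.15, 0.60, 0.70])
feature_b = np.array([0.30, 0.32, 0.35, 0.36])
E = np.column_stack([feature_a, feature_b])    # endmember matrix, one column per endmember

# A mixed pixel covering 70% feature A and 30% feature B (these fractions are its abundances).
mixed_pixel = 0.7 * feature_a + 0.3 * feature_b

# Recovering the abundances by least squares returns approximately [0.7, 0.3].
abundances, *_ = np.linalg.lstsq(E, mixed_pixel, rcond=None)
print(abundances)
```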
  • the following specifically analyzes the technical problems to be solved in the embodiments of the present application and the corresponding application scenarios.
  • In the conventional scheme, the license plate detection algorithm extracts the shape characteristics of the license plate and the movement characteristics of the vehicle to determine whether the current plate is consistent with the characteristics of a real license plate, thereby identifying fake plates.
  • However, current typical methods mainly use information such as the shape and outline of the license plate. Therefore, as long as a fake element of the same color as the real plate, such as a license plate sticker, a plastic number or a magnetic metal sheet, is used to replace or cover one or several characters of the real plate with seamlessly joined edges, these traditional methods basically fail; without human intervention and close observation there is almost no possibility of discovering the fake plate.
  • Therefore, the main problem solved by this application is how to effectively determine whether a license plate to be detected is a forged plate concealed with other materials, and, once it is identified as forged, to give the specific location where the plate is covered.
  • FIG. 1 is a schematic diagram of a system architecture provided by an embodiment of the present application.
  • the system architecture in this application may include the target vehicle 101, the camera device 102, and one or more electronic devices 103 in FIG. 1, wherein the camera device 102 and the electronic device 103 can communicate through wired or wireless or other communication methods.
  • The imaging device 102 is used to photograph the target vehicle 101 and, after acquiring the image of the target vehicle 101, send it to the electronic device 103.
  • The camera device 102 may be a device that performs image decomposition and photoelectric signal conversion, including infrared cameras, black-and-white cameras, hyperspectral cameras, and so on.
  • Image decomposition is the process of decomposing a complete image into several independent pixels (the smallest unit that makes up a television image). Generally speaking, the greater the number of pixels, the clearer the image; each pixel is represented by only a single color and brightness.
  • The imaging device converts the light signal of each pixel in the image into a corresponding electrical signal and then transmits these signals to the output terminal in a certain order.
  • The electronic device 103 may be a communication terminal, a mobile device, a user terminal, a mobile terminal, a wireless communication device, a portable terminal, a user agent, a user device, a service device, or user equipment (UE), or another electronic device mainly used for data processing and for the input, output or display of processing results; it may also be a software client or application installed or running on any of the above devices.
  • For example, the terminal may be a mobile phone, a cordless phone, a smart watch, a wearable device, a tablet device, a handheld device with a wireless communication function, a computing device, an in-vehicle communication module, a smart meter, or another processing device connected to a wireless modem.
  • The electronic device 103 receives the vehicle image or image data sent by the camera device 102, can detect the precise position of the license plate in the image using methods based on hyperspectral color distribution or intelligent detection algorithms, then uses the difference between the spectral information of true and false license plates to determine whether the license plate in the image is a real license plate and, if there is occlusion, further determines the occluded position, and finally outputs the detection result to another display device or terminal device. When the electronic device 103 is a terminal device, it receives the image or image data sent by the camera device 102, detects the precise position of the license plate in the image in the same way, uses the difference in spectral information between true and false license plates to determine whether the license plate in the image is real and, if there is occlusion, further determines the occluded position; the detection result can then be output on the display device of the electronic device 103 itself.
  • FIG. 1 is only an exemplary implementation in the embodiment of the present application, and the system architecture in the embodiment of the present application includes but is not limited to the above system architecture.
  • FIG. 2 is an image detection method provided by an embodiment of the present application. The method includes but is not limited to the following steps:
  • Step S201 Obtain a target image.
  • The electronic device acquires a target image, which is an image captured with supplementary light of a specific waveband and includes a detection area, where the difference in spectral reflectance between the first material and the second material within the specific waveband is greater than the first threshold. The electronic device may obtain the target image from a camera device with a fill-light function, or acquire it directly with its own camera device that has a fill-light function.
  • The camera device may be a black-and-white camera connected to the electronic device in a wired or wireless manner. The black-and-white camera fills in light of the specific waveband to capture a black-and-white image of the target vehicle in which the target object image area is clear and complete, and then outputs it to the electronic device, which thereby obtains the black-and-white target image. Since the difference in spectral reflectance between the first material and the second material within the specific waveband is greater than the first threshold, the image gray values of the first material and the second material in the target image acquired by the imaging device differ considerably.
  • As shown in FIG. 3, the acquired target image includes target vehicle 1, target vehicle 2 and target vehicle 3, as well as the license plate area (i.e. the detection area) of each target vehicle, and the license plate area is clear and complete.
  • The first material may be the blue base plate material of a real license plate and the white plate material of a real license plate, and the second material may be a camouflage metal material, a camouflage sticker material, a camouflage tape material, and so on. The specific waveband is the band in which the spectral reflectance difference between the real-plate blue base plate material and white plate material on the one hand and the camouflage metal, sticker and tape materials on the other is greater than the first threshold. Here, "the spectral reflectance difference is greater than the first threshold" may mean that, within the specific waveband, the average of the differences between the spectral reflectance of the real-plate blue base material and those of the camouflage metal, sticker and tape materials is greater than the first threshold, and likewise for the real-plate white plate material; or it may mean that the sum of those differences for the blue base material is greater than the first threshold, and likewise that the sum of those differences for the white plate material is greater than the first threshold. No restriction is imposed here. In this case, the target image should clearly show the difference in pixel gray values between the first material (the real-plate blue base material and white plate material) and the second material (the camouflage metal, sticker and tape materials).
  • In a possible implementation, the hyperspectral data of the first material and of the second material is collected by a hyperspectral camera, and the hyperspectral data of the first and second materials is used to determine their hyperspectral feature information, which describes the spectral information and image information of the first and second materials. The hyperspectral feature information includes information such as the average spectral reflectance, spectral mean, spectral matrix, average spectral irradiance and resolution of the first and second materials in different wavebands.
  • The first material is a real license plate material, and the second material is a non-real license plate material including metal, sticker, tape and other materials. FIG. 4 shows the spectral curves of the real-plate blue base material, the real-plate white plate material, the camouflage metal material, the camouflage sticker material and the camouflage tape material, which constitute the hyperspectral feature information of the first and second materials. Because the spectral reflectance difference between the first material and the second material in a specific waveband is greater than the first threshold, the specific waveband may be around 500-700 nm, for example between 550 nm and 700 nm.
  • Step S202 Extract a detection area from the target image.
  • After the electronic device obtains the target image, it extracts the detection area from the target image. For example, if the target image is an image of the target vehicle, the detection area is the license plate image area of the target vehicle: after the image of the target vehicle is taken with fill light, a license plate detection algorithm is used to extract the image area of the license plate to be detected from the vehicle image.
  • Step S203 Detect whether there is a target area in the detection area.
  • After the electronic device extracts the detection area from the target image, it detects whether a target area exists in the detection area. The electronic device traverses all the pixels in the detection area and determines as target pixels all pixels whose gray value is not within the first preset range, as well as all pixels whose gray value is within the second preset range, where the first preset range includes the gray value range of the first material and the second preset range includes the gray value range of the second material. The range covered by the target pixels includes the target area, and the target area is determined according to the target pixels; that is, the area composed of all target pixels can be the target area.
  • For example, the detection area is the license plate image area of the target vehicle, and all pixels in the license plate image area are traversed (or pixels within a certain range are selected). Because the license plate image area is captured with fill light of the specific waveband, and the difference in spectral reflectance between the real license plate material and the non-real license plate material within that waveband is greater than the first threshold, the gray values of pixels of the real license plate material and of the non-real license plate material differ in the license plate image area. The first preset range and the second preset range are set according to this difference in pixel gray values between the materials: the first preset range is the gray value range of the image of the real license plate material, and the second preset range is the gray value range of the image of the non-real license plate material. Pixels in the license plate image area whose gray value is not within the first preset range are determined as target pixels, and pixels whose gray value is within the second preset range are also determined as target pixels; in other words, the area covered by the target pixels is an area of non-real license plate material. The target area can thus be a forged area on the license plate, including an area where part of the plate is covered or replaced with other materials, for example where the "0" on the plate is changed into an "8" with other material. A minimal illustrative sketch of this gray-value rule is given below.
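  • A minimal sketch of the rule just described (the gray ranges below are assumed values for illustration, not ranges given in the patent):

```python
import numpy as np

def target_pixel_mask(plate_gray, first_range=(150, 255), second_range=(40, 120)):
    """Flag target pixels in the gray image of the detection area.

    plate_gray:   2-D uint8 gray image of the license-plate detection area
    first_range:  assumed gray-value range of the real-plate material (inclusive)
    second_range: assumed gray-value range of the non-real-plate material (inclusive)

    A pixel is a target pixel if its gray value falls outside the first preset
    range, or inside the second preset range.
    """
    outside_first = (plate_gray < first_range[0]) | (plate_gray > first_range[1])
    inside_second = (plate_gray >= second_range[0]) & (plate_gray <= second_range[1])
    return outside_first | inside_second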
  • The following specifically introduces three ways to determine the target area (non-real license plate material) according to the target pixels; a combined sketch follows the three ways.
  • Manner 1: If the area covered by the target pixels is greater than the second threshold, it is detected that the target area exists in the detection area.
  • For example, the second threshold may be 50 square centimeters. All pixels in the license plate image area are detected, and if the area covered by the target pixels is greater than 50 square centimeters, it is determined that a target area exists in the license plate image area, the target area including the area covered by all target pixels. This manner avoids judging a plate as forged when the occluded part is small and does not affect recognition of the white characters, and thus improves detection accuracy.
  • Manner 2: If the overlap between the area covered by the target pixels and the character area in the detection area is greater than the third threshold, it is detected that the target area exists in the detection area.
  • For example, the third threshold is 15 square centimeters. All pixels in the license plate image area are detected, and if the overlap between the area covered by the target pixels and the white character area in the license plate image area is greater than 15 square centimeters, it is determined that a target area exists in the license plate image area, namely the area covered by all target pixels. This manner avoids judging a plate as forged when the occluder covers only the blue background of the plate and does not block the white characters (and therefore does not affect their recognition), and thus improves detection accuracy.
  • Manner 3: If the area covered by the target pixels exceeds a certain proportion of the detection area, it is detected that the target area exists in the detection area.
  • For example, the proportion can be 5% of the detection area. All pixels in the license plate image area are detected, and if the area covered by the target pixels accounts for more than 5% of the total area of the license plate image area, it is determined that a target area exists in the license plate image area, the target area being the area covered by all target pixels.
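  • The three manners above could be combined roughly as follows; the pixel-to-area conversion factor and the character-region mask are assumptions for illustration, while the thresholds (50 cm², 15 cm², 5%) follow the examples in the text.

```python
import numpy as np

def has_target_area(target_mask, character_mask, cm2_per_pixel,
                    second_threshold=50.0, third_threshold=15.0, ratio_threshold=0.05):
    """Decide whether a target (forged) area exists in the detection area.

    target_mask:    boolean target-pixel mask of the license-plate area
    character_mask: boolean mask of the white character regions in the plate area
    cm2_per_pixel:  assumed physical area of one pixel in square centimetres
    """
    covered_cm2 = target_mask.sum() * cm2_per_pixel
    overlap_cm2 = (target_mask & character_mask).sum() * cm2_per_pixel
    coverage_ratio = target_mask.mean()

    return (covered_cm2 > second_threshold         # manner 1: covered area
            or overlap_cm2 > third_threshold       # manner 2: overlap with character area
            or coverage_ratio > ratio_threshold)   # manner 3: proportion of the detection area

# Example: a 140x440-pixel plate image at an assumed 0.01 cm^2 per pixel.
mask = np.zeros((140, 440), dtype=bool)
mask[40:100, 200:260] = True                       # a hypothetical occluded patch
chars = np.zeros_like(mask)
chars[:, 150:430] = True                           # hypothetical character region
print(has_target_area(mask, chars, cm2_per_pixel=0.01))
```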
  • Step S204 Generate a result.
  • Whether a target area exists in the detection area is detected; when the target area is detected, a first detection result is generated, and when no target area is detected, a second detection result is generated. The first detection result can describe that a target area exists in the detection area and give the specific location of the target area, and the second detection result can describe that no target area exists in the detection area.
  • For example, the first detection result may be a license plate detection result map as shown in FIG. 5, in which the license plate of target vehicle 1 is forged with metal, the license plate of target vehicle 2 is forged with tape, and the license plate of target vehicle 3 is forged with plastic. The second detection result can be a text, image or voice description such as "there is no forged area" or "the license plate is a real license plate".
  • In the embodiments of the present application, the target image is first acquired. The target image is an image captured with supplementary light of a specific waveband; because the difference in spectral reflectance between the first material and the second material within that waveband is greater than the first threshold, the gray values of pixels of different materials in the target image also differ. The first material can be a real license plate material, and the second material can be a non-real license plate material such as metal, plastic or tape. According to the target pixels in the detection area of the target image, it can be detected whether a target area exists in the detection area, that is, whether the license plate image to be detected contains an area that is not within the gray value range of the real license plate material, or that is within the gray value range of the non-real license plate material; this effectively distinguishes real license plates from forged ones.
  • FIG. 6 is another image detection method provided by an embodiment of the present application. The method includes but is not limited to the following steps:
  • Step S601 Obtain the hyperspectral feature information of the image of the first material, and obtain the hyperspectral feature information of the target image.
  • The hyperspectral feature information describes the characteristics of the image of an object under hyperspectral imaging and is related to the material and spectral reflectance of the object. The hyperspectral feature information can be described by the spectral radiation curve, which may also be called the spectral distribution. Compared with an ordinary image, a hyperspectral image has an additional spectral dimension; the spectral dimension is therefore feature information unique to a hyperspectral image, and this feature information can be described by the spectral radiation curve. Within a certain sampling range, for example 50×50 pixels, the average of the spectral radiation curves of the pixels in that range can be taken as representative of the spectral radiation curve of the range, so that in subsequent steps it can be compared with a preset spectral radiation curve to determine whether the two match.
  • The electronic device acquires the hyperspectral feature information of the image of the first material and the hyperspectral feature information of the target image. The hyperspectral feature information includes information such as the average spectral reflectance, spectral mean, spectral matrix, radiation value and resolution in different wavebands. The target image includes a detection area, which is the image area to be detected.
  • the first material may be a real license plate material
  • the target image may be a target vehicle image including a license plate image area.
  • In a possible implementation, the electronic device acquires the hyperspectral data of the image of the first material, where the hyperspectral data may include average spectral reflectance, spectral mean, spectral matrix, average spectral irradiance, resolution and other information. According to the hyperspectral data of the image of the first material, pixels of the image of the first material are selected, and the spectral mean values of the selected pixels (also called the mean values of the spectral curves) form the first spectral matrix (a spectral mean curve can be regarded as a column vector, and the column vectors of different types of targets are stacked to form a spectral matrix). The first spectral matrix is the hyperspectral feature information of the image of the first material and can be used for subsequent detection; that is to say, the electronic device may first obtain the hyperspectral data of the image of the first material and then process that data to obtain the hyperspectral feature information of the image of the first material.
  • In a possible implementation, the electronic device can also acquire the hyperspectral data of the image of the second material; that is, the electronic device acquires the hyperspectral data of the images of both the first material and the second material, where the hyperspectral data may include average spectral reflectance, spectral mean, spectral matrix and other information in different wavebands. The first spectral matrix is the hyperspectral feature information of the image of the first material; according to the hyperspectral data of the image of the second material, pixels in the image of the second material are likewise selected, and the spectral mean values of the selected pixels form the second spectral matrix, which is the hyperspectral feature information of the image of the second material. In other words, the electronic device can first obtain the hyperspectral data of the images of the first and second materials, and then process that data to obtain the hyperspectral feature information of both images.
  • The first material is a real license plate material. For example, a real license plate consists of a blue base plate and five white number plates; the spectral mean values of multiple pixels of the blue base plate of the real license plate material and the spectral mean values of multiple pixels of the five white plates are selected to form the first spectral matrix. The second material may be metal, plastic or tape, materials that can be used to block a real license plate or forge one: the spectral mean values of multiple pixels of the metal material are selected to form a metal spectral matrix, the spectral mean values of multiple pixels of the plastic material form a plastic spectral matrix, and the spectral mean values of multiple pixels of the tape material form a tape spectral matrix; that is, the second spectral matrix includes the metal, plastic and tape spectral matrices. A sketch of how such matrices might be assembled is given below.
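  • A sketch of how these spectral matrices might be assembled (array shapes, sample counts and the random placeholder spectra are illustrative assumptions, not values from the patent):

```python
import numpy as np

def mean_spectrum(samples):
    """samples: (n_pixels, n_bands) spectra of pixels sampled from one material."""
    return samples.mean(axis=0)

n_bands = 300
rng = np.random.default_rng(0)

# Placeholder pixel spectra for each material (would come from the hyperspectral camera).
blue_base    = rng.random((200, n_bands))
white_plates = rng.random((200, n_bands))
metal        = rng.random((200, n_bands))
plastic      = rng.random((200, n_bands))
tape         = rng.random((200, n_bands))

# First spectral matrix: real-plate materials (blue base plate and white number plates), one column each.
M1 = np.column_stack([mean_spectrum(blue_base), mean_spectrum(white_plates)])
# Second spectral matrix: camouflage materials (metal, plastic and tape spectral matrices combined).
M2 = np.column_stack([mean_spectrum(metal), mean_spectrum(plastic), mean_spectrum(tape)])
```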
  • Step S602 Detect whether there is a target area in the detection area.
  • The electronic device uses the hyperspectral feature information of the image of the first material to detect whether a target area exists in the detection area of the target image. The target area includes an area that does not match the hyperspectral feature information of the image of the first material; that is, if the detection area matches the hyperspectral feature information of the image of the first material, no target area exists.
  • For example, the detection area is the license plate image area and the first material is the real license plate material. The hyperspectral data of the license plate image area is analysed and matched against the hyperspectral feature information of the image of the first material; if the hyperspectral data of the license plate image area contains an area that does not match the hyperspectral feature information of the image of the real license plate material, that area is the target area (a non-real license plate area).
  • the hyperspectral data of the detection area is orthogonally projected onto the first spectral matrix of the real license plate material to obtain an abundance estimation detection result.
  • the abundance estimation detection result describes, for each pixel in the detection area, the probability that the pixel is the real license plate material.
  • the pixels whose probability does not meet the preset condition are judged to be target pixels.
  • for example, the abundance estimation detection result describes, for each pixel in the license plate image area, the probability P1 that its hyperspectral data matches the first material (the real license plate material); the preset condition is that P1 is greater than 0.9, that is, if the probability P1 of matching the real license plate material is less than or equal to 0.9, the pixel is a target pixel, meaning it does not match the real license plate material.
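  • The abundance estimation step could be prototyped as below. This is a minimal sketch that assumes an unconstrained least-squares projection onto the columns of the first spectral matrix and treats the summed abundance of the real-plate columns as the probability-like score P1; the 0.9 threshold follows the example above, while the array names and shapes are assumptions.

```python
import numpy as np

def abundance_estimate(cube, M):
    """Least-squares abundances from orthogonal projection onto span(M).

    cube: (rows, cols, bands) hyperspectral data of the detection area.
    M:    (bands, k) spectral matrix whose columns are end-member spectra.
    Returns an array of shape (rows, cols, k).
    """
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).T                   # (bands, n_pixels)
    A, *_ = np.linalg.lstsq(M, X, rcond=None)       # (k, n_pixels)
    return A.T.reshape(rows, cols, -1)

# Synthetic detection area and real-plate spectral matrix, for illustration only.
rng = np.random.default_rng(1)
roi_cube = rng.random((40, 140, 300))               # license plate image area
M_plate  = rng.random((300, 2))                     # blue base + white character spectra

abund = abundance_estimate(roi_cube, M_plate)
p1 = abund.sum(axis=2)                              # probability-like score per pixel
target_mask = p1 <= 0.9                             # pixels failing the preset condition
```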
  • after the electronic device acquires the hyperspectral feature information of the images of the first material and the second material and the hyperspectral feature information of the target image, it uses the hyperspectral feature information of the images of the first material and the second material to detect whether there is a target area in the detection area of the target image. The target area includes an area that does not match the hyperspectral feature information of the image of the first material; that is to say, if the detection area matches the hyperspectral feature information of the image of the first material, the target area does not exist. It is understandable that the target area may also include an area that matches the hyperspectral feature information of the image of the second material.
  • the detection area is a license plate image area
  • the first material is a real license plate material
  • the second material can be a material that blocks the real license plate or forges a real license plate.
  • the hyperspectral data of the license plate image area is analyzed and matched against the hyperspectral feature information of the images of the first material and the second material. If the hyperspectral data of the license plate image area contains an area that does not match the hyperspectral feature information of the image of the real license plate material but matches the hyperspectral feature information of the image of the second material, this area is the target area (a non-real license plate area).
  • the first spectral matrix of the first material and the second spectral matrix of the second material are combined to form an end member matrix; for example, with the real license plate matrix Mp and the metal, plastic and tape matrices M1, M2 and M3, the end member matrix is M = [Mp M1 M2 M3]. The hyperspectral data of the detection area is orthogonally projected onto this end member matrix to obtain an abundance estimation detection result.
  • the abundance estimation detection result describes the probability that each pixel in the detection area is the first material and/or the probability that it is the second material. According to the abundance estimation detection result, the pixels whose probabilities do not meet the preset conditions are judged to be target pixels. For example, the abundance estimation detection result describes, for each pixel in the license plate image area, the probability P1 that its hyperspectral data matches the first material (the real license plate material) and the probability P2 that it matches the second material (a non-real license plate material). The preset conditions are that P1 is greater than 0.8 and P1 is greater than P2.
  • if the probability P1 of matching the real license plate material is less than or equal to 0.8, or P1 is less than or equal to the probability P2 of matching the non-real license plate material, the pixel is a target pixel, that is, it does not match the real license plate material.
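  • Continuing the earlier sketch, combining the matrices and re-running the projection might look like the following; which columns belong to the real-plate material and which to the forgery materials is an assumption made for illustration.

```python
import numpy as np

# Reuses abundance_estimate() from the sketch above; synthetic matrices for illustration.
rng = np.random.default_rng(2)
roi_cube = rng.random((40, 140, 300))
M_plate, M_metal, M_plastic, M_tape = (rng.random((300, k)) for k in (2, 1, 1, 1))

M = np.hstack([M_plate, M_metal, M_plastic, M_tape])    # end member matrix M = [Mp M1 M2 M3]
abund = abundance_estimate(roi_cube, M)

p1 = abund[..., :2].sum(axis=2)         # abundance of the real-plate columns (first material)
p2 = abund[..., 2:].sum(axis=2)         # abundance of the forgery columns (second material)
target_mask = (p1 <= 0.8) | (p1 <= p2)  # pixels failing the preset conditions above
```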
  • Manner 1: If the area covered by the target pixels is greater than the second threshold, it is detected that a target area exists in the detection area.
  • for example, the second threshold is 50 square centimeters, and all pixels in the license plate image area are examined. If the area covered by the target pixels is greater than 50 square centimeters, it is determined that a target area exists in the license plate image area, and the target area is the area covered by all target pixels. This manner avoids the case in which an occluded part of the license plate is so small that it does not affect recognition of the white characters yet the plate is still judged to be forged, which improves the accuracy of detection.
  • Manner 2: If the overlap between the area covered by the target pixels and the character area in the detection area is greater than the third threshold, it is detected that a target area exists in the detection area.
  • for example, the third threshold is 15 square centimeters, and all pixels in the license plate image area are examined. If the overlap between the area covered by the target pixels and the white character area of the license plate image area is greater than 15 square centimeters, it is determined that a target area exists in the license plate image area, and the target area is the area covered by all target pixels.
  • This manner avoids the case in which an occluder on the license plate only covers the blue base of the plate and does not block the white characters (and thus does not affect their recognition) yet the plate is still judged to be forged, which improves the accuracy of detection.
  • Manner 3: If the area covered by the target pixels exceeds a certain proportion of the detection area, it is detected that a target area exists in the detection area.
  • for example, the proportion can be 5% of the detection area, and all pixels in the license plate image area are examined. If the area covered by the target pixels accounts for more than 5% of the total area of the license plate image area, it is determined that a target area exists in the license plate image area, and the target area is the area covered by all target pixels.
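  • The three criteria above could be expressed as a small helper like the one below. The application presents them as alternative manners, so combining them with a single OR here is only for illustration; the square-centimetres-per-pixel calibration and the default thresholds mirror the examples given, and everything else is assumed.

```python
import numpy as np

def has_target_area(target_mask, char_mask, cm2_per_px,
                    area_thresh_cm2=50.0, overlap_thresh_cm2=15.0, frac_thresh=0.05):
    """Decide whether a target (non-real-plate) area exists in the detection area.

    target_mask: boolean mask of target pixels in the license plate image area.
    char_mask:   boolean mask of the white character regions.
    cm2_per_px:  physical area represented by one pixel (an assumed calibration).
    """
    covered_cm2 = target_mask.sum() * cm2_per_px                 # Manner 1
    overlap_cm2 = (target_mask & char_mask).sum() * cm2_per_px   # Manner 2
    fraction = target_mask.mean()                                # Manner 3
    return (covered_cm2 > area_thresh_cm2
            or overlap_cm2 > overlap_thresh_cm2
            or fraction > frac_thresh)
```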
  • the detection area includes the original vehicle image of the target vehicle 1, the original vehicle image of the target vehicle 2, and the original license plate image of the target vehicle 3.
  • the hyperspectral image of the original license plate image is orthogonally projected through the end member matrix. Then, the target pixels are determined according to the obtained abundance estimation detection results to determine the target area.
  • the license plate detection results in Figure 5 respectively show the detection results when the license plate is forged metal, forged tape, or forged plastic.
  • Step S603: Generate a result.
  • whether a target area exists in the detection area is detected: when it is detected that the target area exists, a first detection result is generated; when it is detected that the target area does not exist, a second detection result is generated.
  • the first detection result can describe that a target area exists in the detection area and the specific location of the target area, and the second detection result can describe that no target area exists in the detection area.
  • for example, when the license plate to be detected is a forged license plate, that is, a forged area exists, the first detection result may be a license plate detection result map as shown in FIG. 5, which describes the forged position and the forged material of the license plate.
  • the license plate of target vehicle 1 is a metal forgery
  • the license plate of target vehicle 2 is a tape forgery
  • the license plate of target vehicle 3 is a plastic forgery.
  • the second detection result can be a text, image or voice description such as "there is no forged area" or "the license plate is a real license plate".
  • with this image detection method, the embodiment of the application first obtains the hyperspectral feature information of the image of the first material and the hyperspectral feature information of the target image.
  • the first material may be a real license plate material; the acquired hyperspectral feature information of the image of the real license plate material is then used to check whether there is a target area in the detection area, that is, whether there is an area in the license plate image to be detected that does not match the hyperspectral feature information of the real license plate material. If there is, the first detection result is generated; if not, the second detection result is generated.
  • because different substances have different spectra in a hyperspectral image, the hyperspectral data of the real license plate is used to compare and detect the license plate to be detected. This avoids the drawback of traditional image detection, which can only rely on shape features, and uses the fact that different substances have different spectra in a hyperspectral image to effectively distinguish real license plates from forged license plates.
  • the embodiment of the present application further includes a specific implementation step of acquiring the detection area in the target image before detecting whether there is a target area in the detection area, and this step may include:
  • Step S604: Obtain a hyperspectral image of the target image.
  • the electronic device obtains a hyperspectral image of the target image, and the hyperspectral image of the target image includes the detection area. The electronic device can obtain the hyperspectral image of the target image from a camera device with hyperspectral capability, or directly obtain the hyperspectral image of the target image using its own imaging equipment with hyperspectral capability.
  • a hyperspectral imaging device captures a hyperspectral image of a target vehicle, and if the target object image area in the hyperspectral image is clear and complete, it is output to the electronic device, and the electronic device obtains the hyperspectral image of the target image.
  • Fig. 3 is a hyperspectral image of the target image.
  • the target image includes target vehicle 1, target vehicle 2, and target vehicle 3, as well as the license plate area of the target vehicle. In the image, the license plate area is a clear and complete area.
  • Step S605: Synthesize an RGB image of the target image from the hyperspectral image of the target image.
  • after the electronic device acquires the hyperspectral image of the target image, it extracts the wavelength bands near 700nm, 546nm and 435nm from the hyperspectral image to synthesize an RGB image.
  • the wavelength bands near 700nm, 546nm and 435nm are the red, green and blue bands, respectively. Synthesizing the hyperspectral image into an RGB image makes image analysis more convenient. Because the background data of a hyperspectral image is complex, the amount of data is huge and constructing a spectral library is difficult, this approach makes determining the detection area simpler and more efficient.
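  • A simple way to prototype this band-to-RGB synthesis is to pick, for each of the three reference wavelengths, the nearest band in the cube; the sketch below assumes the cube's per-band centre wavelengths are known, and the normalisation is only illustrative.

```python
import numpy as np

def hsi_to_rgb(cube, wavelengths_nm, targets=(700.0, 546.0, 435.0)):
    """Stack the bands closest to the red/green/blue reference wavelengths.

    cube: (rows, cols, bands); wavelengths_nm: centre wavelength of each band.
    Returns a float RGB image scaled to [0, 1].
    """
    wavelengths_nm = np.asarray(wavelengths_nm, dtype=float)
    idx = [int(np.argmin(np.abs(wavelengths_nm - t))) for t in targets]
    rgb = cube[:, :, idx].astype(float)
    return rgb / max(rgb.max(), 1e-12)

# Example with a synthetic cube covering 400-1000 nm in 300 channels.
wavelengths = np.linspace(400, 1000, 300)
cube = np.random.default_rng(3).random((64, 200, 300))
rgb = hsi_to_rgb(cube, wavelengths)
```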
  • Step S606: Extract the detection area from the RGB image.
  • after the electronic device synthesizes the RGB image of the target image, it extracts the detection area from the RGB image.
  • two ways of extracting the detection area from the RGB image are introduced below.
  • the first way is to directly locate the detection area, that is, the license plate image area, by inputting the RGB image into a license plate detection algorithm.
  • the second way is to first select the pixels in the RGB image that fall within a preset target object color range as specific pixels, and then determine the detection area according to the specific pixels.
  • after the electronic device synthesizes the RGB image from the hyperspectral data of the target image, it converts the RGB image to the HSV color space and traverses the pixels in the RGB image. If the hue H, saturation S and brightness V of a pixel are within the preset target object color range, the pixel is selected as a specific pixel; here the preset target object color range is the color range of the blue pixels of a real license plate, whose hue range is 0.56~0.71, saturation range is 0.4~1 and brightness range is 0.3~1.
  • if the hue H, saturation S and brightness V of a pixel are all within the color range of the blue pixels of the real license plate, the pixel can be considered a blue pixel of the real license plate, that is, it is selected as a specific pixel.
  • the range covered by the specific pixels includes the detection area, and the detection area is determined according to the specific pixels, that is, the area composed of all the specific pixels is the detection area.
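  • The HSV screening step might be prototyped as below; matplotlib's rgb_to_hsv returns hue, saturation and brightness all in [0, 1], which matches the ranges quoted above, and the function and variable names are assumptions for illustration.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def blue_plate_mask(rgb, hue=(0.56, 0.71), sat=(0.4, 1.0), val=(0.3, 1.0)):
    """Mark 'specific pixels' whose HSV values fall in the blue-plate ranges above.

    rgb is a float image of shape (rows, cols, 3) with values in [0, 1].
    """
    hsv = rgb_to_hsv(np.clip(rgb, 0.0, 1.0))
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    return ((h >= hue[0]) & (h <= hue[1]) &
            (s >= sat[0]) & (s <= sat[1]) &
            (v >= val[0]) & (v <= val[1]))
```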
  • the electronic device traverses all the pixels in the RGB image, selects the specific pixels, and verifies whether each connected region formed by the specific pixels meets a preset rectangular specification; if so, the connected region that meets the preset rectangular specification is determined to be the detection area.
  • for example, the circumscribed rectangle of each connected region composed of specific pixels is first determined, and the circumscribed rectangles of all connected regions are verified against specific geometric characteristics of a license plate.
  • according to the national standard, the aspect ratio of a car license plate is 440:140; the circumscribed rectangles of all connected regions are compared against this value, the rectangular regions that meet the condition are determined, and the coordinates of each such rectangular region are output. The coordinates are used to locate the corresponding region in the hyperspectral image of the target image so as to extract the license plate to be detected (that is, the detection area); the original license plate image "Liao B 65PF7" of target vehicle 1, the original license plate image "Guangdong D 6DAF7" of target vehicle 2 and the original license plate image "Liao B 9D243" of target vehicle 3 in FIG. 5 are the detection areas.
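  • A sketch of the connected-region check, using scipy's labelling utilities, is shown below; the tolerance on the 440:140 aspect ratio is an assumed parameter, not a value given in the application.

```python
import numpy as np
from scipy import ndimage

def plate_candidate_boxes(specific_mask, target_ratio=440 / 140, tol=0.5):
    """Bounding boxes of connected specific-pixel regions whose width/height ratio
    is close to the 440:140 plate aspect ratio (tolerance is an assumption)."""
    labels, _ = ndimage.label(specific_mask)
    boxes = []
    for sl in ndimage.find_objects(labels):
        if sl is None:
            continue
        height = sl[0].stop - sl[0].start
        width = sl[1].stop - sl[1].start
        if height > 0 and abs(width / height - target_ratio) <= tol:
            boxes.append((sl[0].start, sl[1].start, height, width))
    return boxes
```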
  • This method selects specific pixels by traversing the pixels of the RGB image of the target image, and obtains the detection area formed by the specific pixels, which can effectively extract the detection area in the RGB image.
  • in the method described in FIG. 6, before detecting whether there is a target area in the detection area, a target image that includes the detection area is first obtained, and the target image is a hyperspectral image; the pixels of the RGB image of the target image are traversed to select specific pixels, and the detection area composed of the specific pixels is obtained. Because the background data of a hyperspectral image is complex, the amount of data is huge and constructing a spectral library is difficult, this approach first obtains the hyperspectral image of the target image and then derives the RGB image to determine the detection area in the target image. It combines the simplicity and efficiency of RGB images for license plate localization with the advantage that different substances have different spectra in a hyperspectral image, which improves the simplicity and practicability of the method.
  • it can be understood that, in order to implement the above functions, each network element, such as an electronic device or a camera device, includes a hardware structure and/or software module corresponding to each function.
  • those skilled in the art should readily appreciate that this application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is executed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each specific application, but such implementations should not be considered to go beyond the scope of this application.
  • the embodiments of the present application can divide the functional modules of electronic equipment, camera equipment, etc. according to the above method examples.
  • each functional module can be divided corresponding to each function, or two or more functions can be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or software function modules. It should be noted that the division of modules in the embodiments of the present application is illustrative, and is only a logical function division, and there may be other division methods in actual implementation.
  • FIG. 7 is a schematic structural diagram of an image detection device 70 provided by an embodiment of the present application.
  • the image detection device 70 may include a first acquiring unit 701, a first detection unit 702 and a first generation unit 703, where the detailed description of each unit is as follows:
  • the first acquisition unit 701 is configured to acquire a target image, the target image is an image captured by supplementing light according to a specific wavelength band, and the target image includes a detection area; wherein, the first material and the second material are in the The difference of the spectral reflectance in the specific waveband is greater than the first threshold;
  • the first detection unit 702 is configured to detect, according to the target pixels in the detection area, whether there is a target area in the detection area;
  • the target pixels include: pixels whose gray value is not within a first preset range, or pixels whose gray value is within a second preset range, where the first preset range includes the gray value range of the first material, and the second preset range includes the gray value range of the second material;
  • the first generating unit 703 is configured to generate a first detection result when it is detected that the target area exists; and when it is detected that the target area does not exist, generate a second detection result.
  • the image detection device may further include an acquisition unit 704, where the acquisition unit 704 is configured to collect, through a hyperspectral camera, hyperspectral data of the first material and hyperspectral data of the second material before the target image is acquired; and a determining unit, configured to determine the range of the specific waveband according to the hyperspectral data of the first material and the hyperspectral data of the second material.
  • the range of the specific wavelength band includes 550 nm to 700 nm.
  • the first detection unit 702 specifically includes: an extraction unit 705, configured to extract the detection area from the target image; and the first detection unit 702 is further configured to: detect that a target area exists in the detection area if the area covered by the target pixels is greater than the second threshold; or detect that a target area exists in the detection area if the overlap between the area covered by the target pixels and the character area range in the detection area is greater than the third threshold.
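  • For the device 70, whose first detection unit works on gray values rather than full spectra, a minimal sketch of the two steps it relies on, choosing a fill-light waveband where the two materials' reflectance differs most and thresholding gray values, might look like the following; the reflectance curves, gray value ranges and thresholds are all assumed placeholders.

```python
import numpy as np

def pick_waveband(wavelengths_nm, refl_first, refl_second, first_threshold):
    """Wavelengths where the reflectance difference between the two materials
    exceeds the first threshold (roughly the 550-700 nm range per the text)."""
    diff = np.abs(np.asarray(refl_first) - np.asarray(refl_second))
    return wavelengths_nm[diff > first_threshold]

def gray_target_mask(gray, first_range, second_range):
    """Target pixels: gray value outside the first material's range,
    or inside the second material's range (ranges are assumed)."""
    g = np.asarray(gray, dtype=float)
    not_first = (g < first_range[0]) | (g > first_range[1])
    is_second = (g >= second_range[0]) & (g <= second_range[1])
    return not_first | is_second

# Illustrative use with synthetic data.
wl = np.linspace(400, 1000, 300)
band = pick_waveband(wl, np.random.rand(300), np.random.rand(300), 0.5)
mask = gray_target_mask(np.random.rand(100, 300), (0.55, 0.95), (0.10, 0.45))
```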
  • each unit can also correspond to the corresponding description of the method embodiment shown in FIG. 2, which will not be repeated here.
  • FIG. 8 is a schematic structural diagram of another image detection device 80 provided by an embodiment of the present application.
  • the image detection device 80 may include a second acquisition unit 801, a second detection unit 802, and a second generation unit 803. Among them, the detailed description of each unit is as follows:
  • the second acquisition unit 801 is used to acquire hyperspectral feature information of the image of the first material, and acquire hyperspectral feature information of the target image.
  • the hyperspectral feature information describes the spectral information of an image, and the target image includes a detection area;
  • the second detection unit 802 is configured to use the hyperspectral feature information of the image of the first material to detect whether there is a target area in the detection area, and the target area includes: an area that does not match the hyperspectral feature information of the first material;
  • the second generating unit 803 is configured to generate a first detection result when it is detected that the target area exists; when it is detected that the target area does not exist, a second detection result is generated.
  • the image detection device 80 further includes: the second acquisition unit 801 is further configured to acquire the hyperspectral feature information of the image of the second material before the hyperspectral feature information of the image of the first material is used to detect whether there is a target area in the detection area;
  • the second detection unit 802 is further configured to use the hyperspectral feature information of the images of the first material and the second material to detect whether there is a target area in the detection area.
  • the device further includes: a composition unit 804, configured to select the spectral mean value of the target pixel of the image of the first material according to the hyperspectral data of the image of the first material, The first spectral matrix is formed, and the hyperspectral characteristic information of the image of the first material includes the first spectral matrix;
  • the second detection unit 802 includes: an orthogonal projection unit 805, configured to orthogonally project the hyperspectral data of the detection area onto the first spectral matrix to obtain an abundance estimation detection result, where the abundance estimation detection result includes a description of the probability that each pixel in the detection area is the first material;
  • the second detection unit 802 is further configured to detect that a target area exists in the detection area if the area covered by the target pixels is greater than the second threshold, or to detect that a target area exists in the detection area if the overlap between the area covered by the target pixels and the character area range in the detection area is greater than the third threshold; the target pixels include the pixels whose probability in the abundance estimation detection result meets the preset condition.
  • the device further includes: the composition unit 804 is further configured to select, according to the hyperspectral data of the image of the first material, the spectral mean values of the target pixels in the image of the first material to form the first spectral matrix, where the hyperspectral feature information of the image of the first material includes the first spectral matrix;
  • the composition unit 804 is further configured to select, according to the hyperspectral data of the image of the second material, the spectral mean values of the target pixels in the image of the second material to form a second spectral matrix, where the hyperspectral feature information of the image of the second material includes the second spectral matrix;
  • the second detection unit 802 includes: the composition unit 804, further configured to combine the first spectral matrix and the second spectral matrix to form an end member matrix; and the orthogonal projection unit 805, further configured to orthogonally project the hyperspectral data of the detection area onto the end member matrix to obtain an abundance estimation detection result, where the abundance estimation detection result includes a description of the probability that each pixel in the detection area is the first material;
  • the second detection unit 802 is further configured to detect that a target area exists in the detection area if the area covered by the target pixels is greater than the second threshold, or if the overlap between the area covered by the target pixels and the character area range in the detection area is greater than the third threshold; the target pixels include the pixels whose probability in the abundance estimation detection result meets the preset condition.
  • each unit may also correspond to the corresponding description of the method embodiment shown in FIG. 6, which will not be repeated here.
  • FIG. 9 shows a schematic diagram of a possible hardware structure of the electronic device involved in the above-mentioned embodiments provided by the embodiments of this application.
  • the electronic device 900 may include: one or more processors 901, one or more memories 902, and one or more communication interfaces 903. These components can be connected through a bus 904 or in other ways.
  • FIG. 9 uses a bus connection as an example. Among them:
  • the communication interface 903 can be used for the electronic device 900 to communicate with other communication devices, such as other electronic devices.
  • the communication interface 903 may be a wired interface.
  • the memory 902 may be coupled with the processor 901 through a bus 904 or an input/output port, and the memory 902 may also be integrated with the processor 901.
  • the memory 902 is used to store various software programs and/or multiple sets of instructions or data.
  • the memory 902 can be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • the memory 902 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
  • the memory 902 may store an operating system (hereinafter referred to as the system), such as embedded operating systems such as uCOS, VxWorks, and RTLinux.
  • the memory 902 may also store a network communication program, which may be used to communicate with one or more additional devices, one or more user devices, and one or more electronic devices.
  • the memory can exist independently and is connected to the processor through a bus.
  • the memory can also be integrated with the processor.
  • the memory 902 is used to store application program codes for executing the above solutions, and the processor 901 controls the execution.
  • the processor 901 is configured to execute application program codes stored in the memory 902.
  • the processor 901 may be a central processing unit, a general-purpose processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array, or other programmable logic devices, transistor logic devices, hardware components, or any combination thereof. It can implement or execute various exemplary logical blocks, modules, and circuits described in conjunction with the disclosure of this application.
  • the processor may also be a combination for realizing certain functions, for example, including a combination of one or more microprocessors, a combination of a digital signal processor and a microprocessor, and so on.
  • the processor 901 may be used to read and execute computer-readable instructions. Specifically, the processor 901 may be used to call a program stored in the memory 902, for example, an implementation program of the image detection method provided by one or more embodiments of the present application on the electronic device 900 side, and execute instructions contained in the program.
  • the electronic device 900 may be the electronic device 103 in the image detection method system shown in FIG. 1, and may be implemented as a basic service set (BSS), an extended service set (ESS), a mobile phone, a computer terminal, or the like.
  • the electronic device 900 shown in FIG. 9 is only an implementation of the embodiment of the present application. In practical applications, the electronic device 900 may also include more or fewer components, which is not limited here. For the specific implementation of the electronic device 900, reference may be made to the relevant description in the method embodiment shown in FIG. 3 or FIG. 5, which will not be repeated here.
  • as shown in FIG. 10, the camera 1000 includes a fill light 1001 and a camera module 1002, where the fill light 1001 is used to generate compensation light of a specific waveband, and the difference in spectral reflectance between the first material and the second material in the specific waveband is greater than the first threshold; the camera module 1002 is used to shoot and acquire a target image based on the specific waveband.
  • the camera may also include a memory, which is used for coupling with the processor, and stores the necessary program instructions and data of the camera.
  • the camera may also include a communication interface for communicating with other devices or a communication network.
  • as shown in FIG. 11, the apparatus 1100 may include: a processor 1101, a bus 1103, and one or more interfaces 1102 coupled to the processor 1101. Among them:
  • the processor 1101 may be used to read and execute computer-readable instructions.
  • the processor 1101 may mainly include a controller, an arithmetic unit, and a register.
  • the controller is mainly responsible for the instruction decoding, and sends out control signals for the operation corresponding to the instruction.
  • the arithmetic unit is mainly responsible for performing fixed-point or floating-point arithmetic operations, shift operations and logic operations, etc., and can also perform address operations and conversions.
  • the register is mainly responsible for storing the register operands and intermediate operation results temporarily stored during the execution of the instruction.
  • the hardware architecture of the processor 1101 can be an application-specific integrated circuit (ASIC) architecture, a microprocessor without interlocked pipeline stages (MIPS) architecture, an advanced RISC machines (ARM) architecture, an NP architecture, or the like.
  • the processor 1101 may be single-core or multi-core.
  • the interface 1102 can be used to input data to be processed to the processor 1101, and can output the processing result of the processor 1101 to the outside.
  • the interface 1102 may be a general purpose input output (GPIO) interface, and may be connected to multiple peripheral devices (such as a display (LCD), a camera, etc.).
  • the interface 1102 may be connected to the processor 1101 through the bus 1103.
  • the display and the camera may be integrated with the processor 1101. In this case, the display and the camera are part of the device 1100.
  • the processor 1101 may be configured to call a program implemented on the network device or user equipment side of the resource reservation method provided by one or more embodiments of the present application from the memory, and execute the instructions contained in the program.
  • the memory may be integrated with the processor 1101. In this case, the memory is used as a part of the apparatus 1100. Alternatively, the memory is used as an external component of the device 1100, and the processor 1101 calls the instructions or data stored in the memory through the interface 1102.
  • the interface 1102 can be used to output the execution result of the processor 1101.
  • For the resource reservation method provided by one or more embodiments of the present application reference may be made to the foregoing embodiments, and details are not described herein again.
  • the foregoing apparatus 1100 may be a communication chip or a system chip (System on a Chip, SoC).
  • the respective functions of the processor 1101 and the interface 1102 may be implemented through hardware design, through software design, or through a combination of software and hardware, which is not limited here.
  • the disclosed device may be implemented in other ways.
  • the device embodiments described above are only illustrative; for example, the division of the above-mentioned units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical or other forms.
  • the units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • if the above integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server or a network device, etc., and specifically a processor in a computer device) to execute all or part of the steps of the above methods of the various embodiments of the present application.
  • the aforementioned storage media may include: U disk, mobile hard disk, magnetic disk, optical disk, read-only memory (Read-Only Memory, abbreviation: ROM) or Random Access Memory (Random Access Memory, abbreviation: RAM), etc.
  • all or part of the processes of the foregoing method embodiments can be completed by a computer program instructing relevant hardware.
  • the program can be stored in a computer readable storage medium, and when the program is executed, it may include the processes of the foregoing method embodiments.
  • the aforementioned storage media include: ROM or random storage RAM, magnetic disks or optical disks and other media that can store program codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Probability & Statistics with Applications (AREA)
  • Remote Sensing (AREA)
  • Astronomy & Astrophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)

Abstract

An image detection method and related device. The image detection method includes: acquiring a target image, which is an image captured with supplementary light in a specific waveband. Because the difference in spectral reflectance between a first material and a second material in the specific waveband is greater than a first threshold, pixels of different materials in the target image also differ in gray value; the first material may be a real license plate material and the second material may be a non-real license plate material. According to the target pixels in a detection area of the target image, it can be detected whether the detection area contains a region whose gray values are outside the gray value range of the real license plate material, or a region whose gray values are within the gray value range of the non-real license plate material; if so, a first detection result is generated, and if not, a second detection result is generated. With the embodiments of this application, real license plates and forged license plates can be distinguished effectively.

Description

一种图像检测方法及相关设备 技术领域
本申请涉及图像处理技术领域,尤其涉及一种图像检测方法及相关设备。
背景技术
汽车号牌是准许汽车上道行驶的法定凭证,是道路交通管理部门、社会治安管理部门及广大人民群众监督汽车行驶情况,识别、记忆与查找的凭证。现有一些假车牌通过粘贴塑料号码、磁力金属片,以及一些蓝色遮挡来伪造车牌数字号码,这些伪造号牌传统图像难以检测出异常,欺骗电子眼监控,有些伪造车牌人眼都难以分辨,最后逃脱法律的制裁。
真假车牌识别首先要求系统能够获得车牌的图像、以及车牌的位置信息。传统的方案采用普通的RGB摄像机拍摄包含车辆和车牌的可见光图像,然后利用图像处理或机器学习的方法,定位出车牌的位置。为了检测出虚假车牌(主要是手持打印车牌、手机电子车牌等),车牌检测算法通过提取车牌的形状特征、车牌与车辆的运动特征,来判断当前的车牌是否与真实车牌特性一致,从而识别出假车牌。
但是,只要采用车牌贴纸、塑料号码、磁力金属片等具有和真实车牌颜色一致的假号码,替换或遮挡真实车牌中的某一个或某几个字符,且保证边缘无缝衔接,那么这些传统的方法基本失效。如果没有人工介入、且近距离仔细观察,则几乎没有发现假车牌的可能。因此,如何有效地区分车牌中伪装以及假目标是本领域技术人员正在研究的问题。
发明内容
本申请实施例提供一种图像检测方法及相关设备,以有效地区分车牌图像中真实车牌以及伪造车牌。
第一方面,本申请实施例提供了一种图像检测方法,包括:
获取目标图像,所述目标图像为根据特定波段进行补光而拍摄得到的图像,所述目标图像中包括检测区域;其中,第一材料和第二材料在所述特定波段中的光谱反射率的差值大于第一阈值;根据所述检测区域中的目标像素点,检测目标图像中是否存在目标区域;所述目标像素点包括:灰度值不在第一预设范围内的像素点,或者灰度值在第二预设范围内的像素点,所述第一预设范围包括所述第一材料的灰度值范围,所述第二预设范围包括所述第二材料的灰度值范围;当检测到存在所述目标区域,生成第一检测结果;当检测到不存在所述目标区域,生成第二检测结果。
本申请实施例通过该图像检测方法,首先获取目标图像,目标图像为根据特定波段进行补光而拍摄得到的图像,由于在该特定波段中第一材料和第二材料的光谱反射率的差值大于第一阈值,目标图像中不同材料的像素点灰度值也不同,该第一材料可以为真实车牌材料,第二材料可以为非真实车牌材料,例如金属、塑料、胶带等等,根据目标图像中检测区域的目标像素点,能够检测该检测区域中是否存在目标区域,即是检验待检测车牌图像中是否有不在真实车牌材料的灰度值范围内的区域,或者在非真实车牌材料的灰度值范围内的区域,如果是,则生成第一检测结果,如果否,则生成第二检测结果。这种通过在目标图像上显示出第一材料和第二材料的差异的方式,能够有效区分真实车牌和伪造车牌。
在一种可能的实现方式中,所述获取目标图像之前,还包括:通过高光谱摄像机采集所述第一材料的高光谱数据和所述第二材料的高光谱数据;根据所述第一材料的高光谱数据和所述第二材料的高光谱数据确定所述特定波段的范围。本申请实施例通过高光谱摄像 机采集第一材料和第二材料的高光谱数据来确定光谱反射率差异最大的波段范围,能够准确的获取特定波段,使得后续的检测过程中可以有效区分出目标图像中的真实车牌材料和伪造车牌材料。
在一种可能的实现方式中,所述特定波段的范围包括550nm~700nm。在该范围内的波段为真实车牌材料与伪造车牌材料的光谱反射率相差最大的波段,利用该范围内的波段进行补光拍摄可以有效区分出目标图像中的真实车牌和伪造车牌。
在一种可能的实现方式中,根据所述检测区域的目标像素点,检测所述检测区域中是否存在目标区域包括:从所述目标图像中提取检测区域;若所述检测区域的目标像素点覆盖的范围大于第二阈值,则检测出所述检测区域中存在目标区域;或者若所述检测区域的目标像素点覆盖的区域范围与所述检测区域中字符区域范围的重叠面积大于第三阈值,则检测出所述检测区域中存在目标区域。本申请实施例通过确定检测区域的目标像素点,来判断检测区域是否存在目标区域,判断方式可以是检测目标像素点覆盖的面积或者检测目标像素点覆盖的区域范围与检测区域中字符区域范围的重叠面积,这种方式方便且低成本的实现了对真实车牌和伪造车牌的区分。
在一种可能的实现方式中,所述目标图像包括车辆图像,所述检测区域包括所述车辆图像中的车牌区域,或者所述车牌图像中的一部分,所述第一材料为真实车牌材料,所述第二材料为非真实车牌材料。相应的,所述图像检测方法可以具体为一种检测车辆图像中车牌真伪的方法。所述第一检测结果描述了所述车牌区域为非真实车牌;所述第二检测结果描述了所述车牌区域为真实车牌。这种方式方便且低成本的实现了对真实车牌和非真实车牌的区分。
第二方面,本申请实施例提供了一种图像检测方法,包括:获取第一材料的图像的高光谱特征信息,以及获取目标图像的高光谱特征信息,所述高光谱特征信息描述了图像的光谱信息,所述目标图像中包括检测区域;利用所述第一材料的图像的高光谱特征信息检测所述检测区域中是否存在目标区域,所述目标区域中包括:与所述第一材料的高光谱特征信息不匹配的区域;当检测到存在所述目标区域,生成第一检测结果;当检测到不存在所述目标区域,生成第二检测结果。
本申请实施例通过该图像检测方法,首先获取第一材料的图像的高光谱特征信息以及目标图像的高光谱特征信息,该第一材料可以为真实车牌材料,然后利用获取的真实车牌材料的图像的高光谱特征信息检验检测区域中是否存在目标区域,即是检验待检测车牌图像中是否有与真实车牌材料的高光谱特征信息不匹配的区域,如果有,则生成第一检测结果,如果没有,则生成第二检测结果。由于高光谱图像中不同物质的光谱不同,利用真实车牌的高光谱数据,来对待检测车牌进行对比检测,避免了传统图像检测只能通过特征识别的弊端,利用高光谱图像中不同物质光谱不同的优势,有效区分了真实车牌和伪造车牌。
在一种可能的实现方式中,所述利用所述第一材料的图像的高光谱特征信息检测所述检测区域中是否存在目标区域之前,还包括:获取第二材料的图像的高光谱特征信息;利用所述第一材料的图像的高光谱特征信息检测所述检测区域中是否存在目标区域,包括:利用所述第一材料和所述第二材料的图像的高光谱特征信息检测所述检测区域中是否存在目标区域。在本申请实施例中,可以获取第一材料和第二材料的图像的高光谱特征信息,该第一材料可以为真实车牌材料,第二材料可以为非真实车牌材料,在后续检测的过程中, 利用第一材料和第二材料的图像的高光谱特征信息共同检测目标图像中是否存在目标区域,提高了检测的准确度。
在一种可能的实现方式中,所述获取第一材料的图像的高光谱特征信息包括:根据所述第一材料的图像的高光谱数据,选取所述第一材料的图像的目标像素点的光谱均值,组成第一光谱矩阵,所述第一材料的图像的高光谱特征信息包括所述第一光谱矩阵;所述利用所述第一材料的图像的高光谱特征信息检测所述检测区域中是否存在目标区域包括:将所述检测区域的高光谱数据在所述第一光谱矩阵上进行正交投影,以得到丰度估计检测结果,所述丰度估计检测结果包括所述检测区域中每个像素点为所述第一材料的概率的描述;若目标像素点覆盖的面积大于第二阈值,则检测出所述检测区域中存在目标区域;或者,若所述目标像素点覆盖的区域范围与所述检测区域中字符区域范围的重叠面积大于第三阈值,则检测出所述检测区域中存在目标区域;所述目标像素点包括所述丰度估计检测结果中所述概率符合预设条件的像素点。
在本申请实施例中首先获取第一材料的图像的高光谱数据,通过该高光谱数据组成第一材料的光谱矩阵,然后在第一光谱矩阵上进行正交投影的方式确定检测区域中每个像素点为第一材料的概率,能够确定出检测区域中不为第一材料的像素点,再通过检测不为第一材料的像素点覆盖的面积大小或者与检测区域中字符区域范围的重叠面积大小,来判断是否为目标区域。
在一种可能的实现方式中,所述获取第一材料的图像的高光谱特征信息包括:根据所述第一材料的图像的高光谱数据,选取所述第一材料的图像中的目标像素点的光谱均值,组成第一光谱矩阵,所述第一材料的图像的高光谱特征信息包括所述第一光谱矩阵;所述获取第二材料的图像的高光谱特征信息包括:根据所述第二材料的图像的高光谱数据,选取所述第二材料的图像中的目标像素点的光谱均值,组成第二光谱矩阵,所述第二材料的图像的高光谱特征信息包括所述第二光谱矩阵;所述利用所述第一材料和所述第二材料的图像的高光谱特征信息检测所述检测区域中是否存在目标区域包括:将所述第一光谱矩阵和所述第二光谱矩阵共同组成端元矩阵;将所述检测区域的高光谱数据在所述端元矩阵上进行正交投影,以得到丰度估计检测结果,所述丰度估计检测结果包括所述检测区域中每个像素点为所述第一材料的概率的描述;若目标像素点覆盖的面积大于第二阈值,则检测出所述检测区域中存在目标区域;或者,若所述目标像素点覆盖的区域范围与所述检测区域中字符区域范围的重叠面积大于第三阈值,则检测出所述检测区域中存在目标区域;所述目标像素点包括所述丰度估计检测结果中所述概率符合预设条件的像素点。在本申请实施例中首先获取第一材料和第二材料的图像的高光谱数据,通过该高光谱数据组成第一材料的光谱矩阵和第二材料的光谱矩阵,然后通过在第一光谱矩阵和第二光谱矩阵共同组成的端元矩阵上进行正交投影的方式,确定检测区域中每个像素点为第一材料或第二材料的概率,能够确定出检测区域中不为第一材料的像素点或为第二材料的像素点,再通过检测这些像素点覆盖的面积大小或者与检测区域中字符区域范围的重叠面积大小,来判断是否为目标区域,有效区分了检测区域中的第一材料和第二材料,以及检测目标对象是否存在目标区域。
第三方面,本申请实施例提供了一种图像检测装置,包括:
第一获取单元,用于获取目标图像,所述目标图像为根据特定波段进行补光而拍摄得 到的图像,所述目标图像中包括检测区域;其中,第一材料和第二材料在所述特定波段中的光谱反射率的差值大于第一阈值;
第一检测单元,根据所述检测区域的目标像素点,检测所述检测区域中是否存在目标区域;所述目标像素点包括:灰度值不在第一预设范围内的像素点,或者灰度值在第二预设范围内的像素点,所述第一预设范围包括所述第一材料的灰度值范围,所述第二预设范围包括所述第二材料的灰度值范围;
第一生成单元,用于当检测到存在所述目标区域,生成第一检测结果;当检测到不存在所述目标区域,生成第二检测结果。
在一种可能的实现方式中,所述装置还包括:采集单元,用于所述获取目标图像之前,通过高光谱摄像机采集所述第一材料的高光谱数据和所述第二材料的高光谱数据;确定单元,用于根据所述第一材料的高光谱数据和所述第二材料的高光谱数据确定所述特定波段的范围。
在一种可能的实现方式中,所述特定波段的范围包括550nm~700nm。
在一种可能的实现方式中,所述第一检测单元具体包括:提取单元,用于从所述目标图像中提取检测区域;所述第一检测单元,还用于若所述检测区域的目标像素点覆盖的面积大于第二阈值,则检测出所述检测区域中存在目标区域;或者若所述检测区域的目标像素点覆盖的区域范围与所述检测区域中字符区域范围的重叠面积大于第三阈值,则检测出所述检测区域中存在目标区域。
应当理解的是,本申请的第三方面与本申请的第一方面的技术方案一致,各方面及对应的可行实施方式所取得的有益效果相似,不再赘述。
第四方面,本申请实施例提供了一种图像检测装置,包括:
第二获取单元,用于获取第一材料的图像的高光谱特征信息,以及获取目标图像的高光谱特征信息,所述高光谱特征信息描述了图像的光谱信息,所述目标图像中包括检测区域;
第二检测单元,用于利用所述第一材料的图像的高光谱特征信息检测所述检测区域中是否存在目标区域,所述目标区域中包括:与所述第一材料的高光谱特征信息不匹配的区域;
第二生成单元,用于当检测到存在所述目标区域,生成第一检测结果;当检测到不存在所述目标区域,生成第二检测结果。
在一种可能的实现方式中,所述装置还包括:所述第二获取单元,还用于在所述利用所述第一材料的图像的高光谱特征信息检测所述检测区域中是否存在目标区域之前,获取第二材料的图像的高光谱特征信息;
所述第二检测单元,还用于利用所述第一材料和所述第二材料的图像的高光谱特征信息检测所述检测区域中是否存在目标区域。
在一种可能的实现方式中,所述装置还包括:组成单元,用于根据所述第一材料的图像的高光谱数据,选取所述第一材料的图像的目标像素点的光谱均值,组成第一光谱矩阵,所述第一材料的图像的高光谱特征信息包括所述第一光谱矩阵;所述第二检测单元包括:正交投影单元,用于将所述检测区域的高光谱数据在所述第一光谱矩阵上进行正交投影,以得到丰度估计检测结果,所述丰度估计检测结果包括所述检测区域中每个像素点为所述 第一材料的概率的描述;所述第二检测单元,还用于若目标像素点覆盖的面积大于第二阈值,则检测出所述检测区域中存在目标区域;或者,若所述目标像素点覆盖的区域范围与所述检测区域中字符区域范围的重叠面积大于第三阈值,则检测出所述检测区域中存在目标区域;所述目标像素点包括所述丰度估计检测结果中所述概率符合预设条件的像素点。
在一种可能的实现方式中,所述装置还包括:所述组成单元,还用于根据所述第一材料的图像的高光谱数据,选取所述第一材料的图像中的目标像素点的光谱均值,组成第一光谱矩阵,所述第一材料的图像的高光谱特征信息包括所述第一光谱矩阵;所述组成单元,还用于根据所述第二材料的图像的高光谱数据,选取所述第二材料的图像中的目标像素点的光谱均值,组成第二光谱矩阵,所述第二材料的图像的高光谱特征信息包括所述第二光谱矩阵。所述第二检测单元包括:所述组成单元,还用于将所述第一光谱矩阵和所述第二光谱矩阵共同组成所述端元矩阵;所述正交投影单元,还用于将所述检测区域的高光谱数据在所述端元矩阵上进行正交投影,以得到丰度估计检测结果,所述丰度估计检测结果包括所述检测区域中每个像素点为所述第一材料的概率的描述;所述第二检测单元,还用于若目标像素点覆盖的面积大于第二阈值,则检测出所述检测区域中存在目标区域;或者,若所述目标像素点覆盖的区域范围与所述检测区域中字符区域范围的重叠面积大于第三阈值,则检测出所述检测区域中存在目标区域;所述目标像素点包括所述丰度估计检测结果中所述概率符合预设条件的像素点。
应当理解的是,本申请的第四方面与本申请的第二方面的技术方案一致,各方面及对应的可行实施方式所取得的有益效果相似,不再赘述。
第五方面,本申请实施例提供一种终端设备,该终端设备中包括处理器,处理器被配置为支持该终端设备实现第一方面提供的图像检测方法中相应的功能。该终端设备还可以包括存储器,存储器用于与处理器耦合,其保存该终端设备必要的程序指令和数据。该终端设备还可以包括通信接口,用于该网络设备与其他设备或通信网络通信。应当理解的是,本申请的第五方面与本申请的第一方面的技术方案一致,各方面及对应的可行实施方式所取得的有益效果相似,不再赘述。
第六方面,本申请实施例提供一种摄像机,该摄像机中包括补光灯和摄像模块,其中,所述补光灯用于产生特定波段的补偿光,其中,第一材料和第二材料在所述特定波段下的光谱反射率的差值大于第一阈值;所述摄像模块用于基于所述特定波段拍摄并获取目标图像。该摄像机还可以包括存储器,存储器用于与处理器耦合,其保存该摄像机必要的程序指令和数据。该摄像机还可以包括通信接口,用于与其他设备或通信网络通信。应当理解的是,本申请的第六方面与本申请的第一方面的技术方案一致,各方面及对应的可行实施方式所取得的有益效果相似,不再赘述。
第七方面,本申请实施例提供一种计算机可读存储介质,用于储存为上述第三方面或第四方面提供的一种图像检测装置所用的计算机软件指令,其包含用于执行上述方面所设计的程序。
第八方面,本申请实施例提供了一种计算机程序,该计算机程序包括指令,当该计算机程序被计算机执行时,使得计算机可以执行上述第三方面或第四方面中的图像检测装置所执行的流程。
第九方面,本申请提供了一种芯片系统,该芯片系统包括处理器,用于支持电子设备 实现上述第一方面或第二方面中所涉及的功能,例如,生成或处理上述图像检测方法中所涉及的信息。在一种可能的设计中,所述芯片系统还包括存储器,所述存储器,用于保存数据发送设备必要的程序指令和数据。该芯片系统,可以由芯片构成,也可以包含芯片和其他分立器件。
附图说明
为了更清楚地说明本申请实施例或背景技术中的技术方案,下面将对本申请实施例或背景技术中所需要使用的附图进行说明。
图1是本申请实施例提供的一种图像检测方法的系统架构示意图;;
图2是本申请实施例提供的一种图像检测方法的流程示意图;
图3是本申请实施例提供的一种获取目标图像的应用场景示意图;
图4是本申请实施例提供的一种不同材料光谱差异的示意图;
图5是本申请实施例提供的一种目标图像检测结果的应用场景示意图;
图6是本申请实施例提供的另一种图像检测方法的流程示意图;
图7是本申请实施例提供的一种图像检测装置的结构示意图;
图8是本申请实施例提供的另一种图像检测装置的结构示意图;
图9是本申请实施例提供的一种电子设备的硬件结构示意图;
图10是本申请实施例提供的一种摄像机的硬件结构示意图;
图11为本申请实施例提供的一种通信芯片的结构示意图。
具体实施方式
下面结合本申请实施例中的附图对本申请实施例进行描述。
本申请的说明书和权利要求书及所述附图中的术语“第一”、“第二”、“第三”和“第四”等是用于区别不同对象,而不是用于描述特定顺序。此外,术语“包括”和“具有”以及它们任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或单元的过程、方法、系统、产品或设备没有限定于已列出的步骤或单元,而是可选地还包括没有列出的步骤或单元,或可选地还包括对于这些过程、方法、产品或设备固有的其它步骤或单元。
在本文中提及“实施例”意味着,结合实施例描述的特定特征、结构或特性可以包含在本申请的至少一个实施例中。在说明书中的各个位置出现该短语并不一定均是指相同的实施例,也不是与其它实施例互斥的独立的或备选的实施例。本领域技术人员显式地和隐式地理解的是,本文所描述的实施例可以与其它实施例相结合。
在本说明书中使用的术语“部件”、“模块”、“系统”等用于表示计算机相关的实体、硬件、固件、硬件和软件的组合、软件、或执行中的软件。例如,部件可以是但不限于,在处理器上运行的进程、处理器、对象、可执行文件、执行线程、程序和/或计算机。通过图示,在计算设备上运行的应用和计算设备都可以是部件。一个或多个部件可驻留在进程和/或执行线程中,部件可位于一个计算机上和/或分布在2个或更多个计算机之间。此外,这些部件可从在上面存储有各种数据结构的各种计算机可读介质执行。部件可例如根据具有一个或多个数据分组(例如来自与本地系统、分布式系统和/或网络间的另一部件交互的二个部件的数据,例如通过信号与其它系统交互的互联网)的信号通过本地和/或远程进程来通信。
首先,对本申请中的部分用语进行解释说明,以便于本领域技术人员理解。
(1)高光谱图像:光谱分辨率在10l数量级范围内的光谱图像称为高光谱图像,所谓高光谱图像就是在光谱维度上进行了细致的分割,不仅仅是传统所谓的黑、白或者红、绿、蓝的区别,而是在光谱维度上也有N个通道,例如:我们可以把400nm-1000nm分为300个通道。因此,通过高光谱设备获取到的是一个数据立方,不仅有图像的信息,并且在光谱维度上进行展开,结果不仅可以获得图像上每个点的光谱数据,还可以获得任一个谱段的影像信息。也即是说,高光谱图像集样本的图像信息与光谱信息于一身。图像信息可以反映样本的大小、形状、缺陷等外部品质特征,由于不同成分对光谱吸收也不同,在某个特定波长下图像对某个缺陷会有较显著的反映,而光谱信息能充分反映样品内部的物理结构、化学成分的差异。
(2)RGB(Red,Green,Blue):RGB色彩模式是工业界的一种颜色标准,是通过对红(R)、绿(G)、蓝(B)三个颜色通道的变化以及它们相互之间的叠加来得到各式各样的颜色的,RGB即是代表红、绿、蓝三个通道的颜色,这个标准几乎包括了人类视力所能感知的所有颜色,是目前运用最广的颜色系统之一。
(3)HSV(Hue,Saturation,Value):HSV是根据颜色的直观特性创建的一种颜色空间,也称六角锥体模型(Hexcone Model)。这个模型中颜色的参数分别是:色调(H)、饱和度(S)、亮度(V)。其中,色调H:用角度度量,取值范围为0°~360°,从红色开始按逆时针方向计算,红色为0°,绿色为120°,蓝色为240°。它们的补色是:黄色为60°,青色为180°,品红为300°;饱和度S:表示颜色接近光谱色的程度。一种颜色,可以看成是某种光谱色与白色混合的结果。其中光谱色所占的比例愈大,颜色接近光谱色的程度就愈高,颜色的饱和度也就愈高。饱和度高,颜色则深而艳。光谱色的白光成分为0,饱和度达到最高。通常取值范围为0%~100%,值越大,颜色越饱和;亮度V:表示颜色明亮的程度,对于光源色,明度值与发光体的光亮度有关;对于物体色,此值和物体的透射比或反射比有关。通常取值范围为0%(黑)到100%(白)。
(4)端元:相当于一个像素点里的亚像素点,只包含一种地物的光谱信息,根据多光谱或高光谱的分辨率可以提取出来。假设有2个像素点,其中一个像素点中有A、B、C三种地物,那么该像素点就称为混合像素点;另一个像素点中只有单一的一种地物,那么该像素点就称为纯净像素点,可以作为一种端元。
(5)丰度:端元只包含一种地物信息,一般的像素点都为混合像素点,包括多种地物,在进行混合像素点分解的时候,可以对一个像素点中包括的几种端元进行定量描述,求得每个像素点中几种端元在这个像素点中的面积百分比,即端元的丰度。
其次,为了便于理解本申请实施例,以下具体分析本申请实施例所需要解决的技术问题以及对应的应用场景。在对车牌图像进行检测的过程中,大多数采用普通的RGB摄像机拍摄包含车辆和车牌的可见光图像,然后利用图像处理或机器学习的方法,定位出车牌的位置。为了检测出虚假车牌(主要是手持打印车牌、手机电子车牌等),车牌检测算法通过提取车牌的形状特征、车牌与车辆的运动特征,来判断当前的车牌是否与真实车牌特性一致,从而识别出假车牌。然而,仔细考虑上述过程可以发现,目前典型的方法主要运用了车牌的形状、轮廓等信息,因此,只要采用车牌贴纸、塑料号码、磁力金属片等具有和真实车牌颜色一致的假号码,替换或遮挡真实车牌中的某一个或某几个字符,且保证边缘无缝衔接,那么这些传统的方法基本失效。如果没有人工介入、且近距离仔细观察,则几乎 没有发现假车牌的可能。
因此,针对上述技术问题,本申请主要解决的问题为如何有效区分待检测车牌是否为被其他材料遮挡的伪造车牌,且在鉴别为伪造车牌后,提供待检测车牌被遮挡的具体位置。
基于上述提出的技术问题以及本申请中对应的应用场景,也为了便于理解本申请实施例,下面先对本申请实施例所基于的其中一种系统架构进行描述。请参阅图1,图1是本申请实施例提供的一种系统构架示意图。本申请中的系统构架可以包括图1中的目标车辆101、摄像设备102和一个或多个电子设备103,其中,摄像设备102和电子设备103之间可通过有线或者无线的方式或者其他通信方式进行通信。在本系统中,摄像设备102用于拍摄目标车辆101,获取到目标车辆101的图像后,发送到电子设备103。其中,
摄像设备102可以是完成图像分解和光电信号转换的器件,包括红外摄像头、黑白摄像头、高光谱摄像头等等,图像分解是把一幅完整图像分解成若干独立的像素(构成电视图像画面的最小单元)的过程。一般来说,像素的数目愈多,图像愈清晰。每个像素只用单一的颜色和亮度表示。摄像设备能把图像中各像素的光信号转变成相应的电信号,再按一定的顺序传送到输出端。
电子设备103可以是通信终端、移动设备、用户终端、移动终端、无线通信设备、便携式终端、用户代理、用户装置、服务设备或用户设备(User Equipment,UE)等电子设备,主要用于数据的输入以及处理结果的输出或显示等,也可以是安装于或运行于上述任一一设备上的软件客户端、应用程序等。例如,终端可以是移动电话、无绳电话、智能手表、可穿戴设备、平板设备、具备无线通信功能的手持设备、计算设备、车载通信模块、智能电表或连接到无线调制解调器的其它处理设备等。当电子设备103为服务器时,电子设备103接收到摄像设备102发送的车辆图像或影像数据,利用基于高光谱色彩分布的方法或智能检测算法等方式可以检测出图像中车牌的精确位置,再利用真假车牌的光谱信息差异,判断该图像中车牌是否为真实车牌,如有遮挡,进一步判断遮挡位置,然后可以将检测结果输出到其他显示设备或终端设备上;当电子设备103为终端设备时,电子设备103接收到摄像设备102发送的图像或影像数据,利用基于高光谱色彩分布的方法或智能检测算法等方式可以检测出图像中车牌的精确位置,再利用真假车牌的光谱信息差异,判断该图像中车牌是否为真实车牌,如有遮挡,进一步判断遮挡位置,然后可以将检测结果输出到该电子设备103的显示设备上。
可以理解的是,图1中的网络架构只是本申请实施例中的一种示例性的实施方式,本申请实施例中的系统架构包括但不仅限于以上系统架构。
基于上述系统架构,下面结合实施例以及附图详细描述本申请提供的图像检测方法。
请参见图2,图2是本申请实施例提供的一种图像检测方法,该方法包括但不限于如下步骤:
步骤S201:获取目标图像。
具体地,电子设备获取目标图像,目标图像为根据特定波段进行补光而拍摄得到的图像,该目标图像中包括检测区域;其中,第一材料和第二材料在特定波段中的光谱反射率的差值大于第一阈值,电子设备可以从具有补光功能的摄像设备中获取目标图像,也可以 使用自身的具有补光功能的摄像设备直接获取目标图像。举例来说,摄像设备可以为通过有线或无线的方式与电子设备连接的黑白摄像机,黑白摄像机通过特定波段补光,拍摄到目标车辆的黑白图像,若该黑白图像中的目标对象图像区域清晰完整,则输出到电子设备上,电子设备获取到目标图像的黑白图像。由于在该特定波段中第一材料和第二材料的光谱反射率的差值大于第一阈值,则通过摄像设备获取的目标图像中第一材料和第二材料的图像灰度值差异较大。
举例来说,如图3所示,示例性的,图3为获取的目标图像,目标图像中包括目标车辆1、目标车辆2和目标车辆3,以及目标车辆的车牌区域(即检测区域),在该图像中,车牌区域为清晰完整的区域,由于该目标图像为使用了特定波段补光而获得的图像,第一材料可以为真实车牌蓝色底牌材料和真实车牌白色号牌材料,第二材料可以为伪装金属材料、伪装贴纸材料、伪装胶带材料等等,则该特定波段为真实车牌蓝色底牌材料、真实车牌白色号牌材料和伪装金属材料、伪装贴纸材料、伪装胶带材料在同一波段上的光谱反射率相差大于第一阈值的波段,其中,光谱反射率相差大于第一阈值可以是,真实车牌蓝色底牌材料的光谱反射率在该特定波段内与伪装金属材料、伪装贴纸材料、伪装胶带材料的光谱反射率分别相差的数值的平均值大于第一阈值,同时真实车牌白色号牌材料的光谱反射率在该特定波段内与伪装金属材料、伪装贴纸材料、伪装胶带材料的光谱反射率分别相差的数值的平均值大于第一阈值;光谱反射率相差大于第一阈值也可以是,真实车牌蓝色底牌材料的光谱反射率在该特定波段内与伪装金属材料、伪装贴纸材料、伪装胶带材料的光谱反射率分别相差的数值的总和大于第一阈值,同时真实车牌白色号牌材料的光谱反射率在该特定波段内与伪装金属材料、伪装贴纸材料、伪装胶带材料的光谱反射率分别相差的数值的总和大于第一阈值,此处不做限制;此时目标图像中应该能够明显区分第一材料(真实车牌蓝色底牌材料、真实车牌白色号牌材料)以及第二材料(伪装金属材料、伪装贴纸材料、伪装胶带材料)的像素点在灰度值上的区别。
在其中一个实施方式中,在电子设备获取目标图像之前,通过高光谱摄像机采集第一材料的高光谱数据和第二材料的高光谱数据,通过第一材料和第二材料的高光谱特征信息确定特定波段的范围,高光谱特征信息描述了第一材料和第二材料的光谱信息和图像信息,该高光谱特征信息包括第一材料和第二材料在不同波段下的平均光谱反射率、光谱均值、光谱矩阵、平均光谱辐照度、分辨率等信息。根据第一材料和第二材料的高光谱特征信息确定特定波段,该特定波段为第一材料和所述第二材料在同一波段上的光谱反射率相差最大的波段,举例来说,第一材料为真实车牌材料,第二材料为包括金属、贴纸、胶带等材料的非真实车牌材料,如图4所示,图4为真实车牌蓝色底牌材料、真实车牌白色号牌材料、伪装金属材料、伪装贴纸材料以及伪装胶带材料的光谱曲线,该光谱曲线即为第一材料和第二材料的高光谱特征信息,由于在特定波段内第一材料和第二材料的光谱反射率相差大于第一阈值,由图易知,为了在拍摄得到的目标图像中较明显的区分出第一材料和第二材料,该特定波段可以在在500nm左右,例如550nm~700nm之间。
步骤S202:从目标图像中提取检测区域。
具体地,电子设备获取到目标图像后,从目标图像中提取检测区域,举例来说,目标图像为目标车辆的图像,则检测区域为目标车辆中的车牌图像区域,电子设备获取到利用特定波段补光拍摄的目标车辆的图像后,使用车牌检测算法从目标车辆的图像中提取待检 测车牌的图像区域。
步骤S203:检测该检测区域中是否存在目标区域。
具体地,电子设备从目标图像中提取检测区域后,检测该检测区域中是否存在目标区域,电子设备通过遍历检测区域中的所有像素点,将检测区域中所有灰度值不在第一预设范围内的像素点或者灰度值在第二预设范围的像素点判定为目标像素点,其中第一预设范围包括第一材料的灰度值范围,第二预设范围包括第二材料的灰度值范围,目标像素点覆盖的范围包括目标区域,根据目标像素点确定目标区域,即所有目标像素点组成的区域可以为目标区域。
举例来说,检测区域为目标车辆中的车牌图像区域,遍历该车牌图像区域中的所有像素点(或者选取一定范围内的像素点),由于该车牌图像区域为使用了特定波段补光而获得的图像,该特定波段内真实车牌材料和非真实车牌材料的光谱反射率(可以是反射率的均值)相差大于第一阈值,则该车牌图像区域中真实车牌材料和非真实车牌材料之间像素点的灰度值是不同的,根据不同材料之间像素点的灰度值差异,设置第一预设范围和第二预设范围,该第一预设范围为真实车牌材料的图像的灰度值范围,该第二预设范围为非真实车牌材料的图像的灰度值范围,车牌图像区域中灰度值不在第一预设范围内的像素点判定为目标像素点,以及灰度值在第二预设范围内的像素点也判定为目标像素点,即目标像素点覆盖的区域属于非真实车牌材料的区域。也即是说,目标区域可以是车牌上的伪造区域,包括把车牌上的一部分区域用其他材料覆盖或者替换,例如把车牌上的“0”用其他材料替换成“8”。下面具体介绍三种根据目标像素点确定目标区域(非真实车牌材料)的方式:
方式一,若目标像素点覆盖的区域面积大于第二阈值,则检测出检测区域中存在目标区域。举例来说,第二阈值可以是50平方厘米,检测车牌图像区域中的所有像素点,若目标像素点覆盖的面积大于50平方厘米,则判定在该车牌图像区域中存在目标区域,该目标区域包括所有目标像素点覆盖的区域范围,这种方式避免了车牌中遮挡部分很小且不影响识别白色号牌,从而导致判断为伪造车牌的情况,提高了检测的精确度。
方式二,若目标像素点覆盖的区域范围与检测区域中字符区域范围的重叠面积大于第三阈值,则检测出检测区域中存在目标区域。举例来说,第三阈值为15平方厘米,检测车牌图像区域的所有像素点,若目标像素点覆盖的区域范围与车牌图像区域的白色号牌区域的重叠面积大于15平方厘米,则判定在该车牌图像区域中存在目标区域,该目标区域为所有目标像素点覆盖的区域范围,这种方式避免了车牌中的遮挡物只遮挡了车牌的蓝色底部而没有遮挡白色号牌(不影响识别白色号牌),从而导致判断为伪造车牌的情况,提高了检测的精确度。
方式三,若目标像素点覆盖的区域面积超过某个范围,则检测出检测区域中存在目标区域。举例来说,该范围可以为检测区域的5%,检测车牌图像区域的所有像素点,若目标像素点覆盖的区域面积占车牌图像区域的总面积的比例超过5%,则判定在该车牌图像区域中存在目标区域,该目标区域为所有目标像素点覆盖的区域范围。
步骤S204:生成结果。
具体地,检测该检测区域是否存在目标区域,当检测到存在目标区域,生成第一检测结果;当检测到不存在目标区域,生成第二检测结果;其中,第一检测结果可以描述检测区域存在目标区域以及目标区域的具体位置,第二检测结果可以描述检测区域不存在目标 区域。举例来说,当检测到待检测车牌为伪造车牌,即存在伪造区域,第一检测结果可以是如图5的车牌检测结果图,该车牌检测结果图描述了该车牌的伪造位置以及伪造的材料,即目标车辆1为金属伪造、目标车辆2为胶带伪造、目标车辆3为塑料伪造,而当检测到待检测车牌为真实车牌,即不存在伪造区域,第二检测结果可以是“不存在伪造区域”或“车牌为真实车牌”等文字或图像或语音描述。
在图2所描述的方法中,首先获取目标图像,目标图像为根据特定波段进行补光而拍摄得到的图像,由于在该特定波段中第一材料和第二材料的光谱反射率的差值大于第一阈值,目标图像中不同材料的像素点灰度值也不同,该第一材料可以为真实车牌材料,第二材料可以为非真实车牌材料,例如金属、塑料、胶带等等,根据目标图像中检测区域的目标像素点,能够检测该检测区域中是否存在目标区域,即是检验待检测车牌图像中是否有不在真实车牌材料的灰度值范围内的区域,或者在非真实车牌材料的灰度值范围内的区域,如果是,则生成第一检测结果,如果否,则生成第二检测结果。这种通过在目标图像上显示出第一材料和第二材料的差异的方式,能够有效区分真实车牌和伪造车牌。
请参见图6,图6是本申请实施例提供的另一种图像检测方法,该方法包括但不限于如下步骤:
步骤S601:获取第一材料的图像的高光谱特征信息,以及获取目标图像的高光谱特征信息。
具体地,高光谱特征信息描述了物体的图像在高光谱下的特征,与物体的材质、光谱反射率相关。高光谱特征信息描述可以由光谱辐射曲线进行描述,光谱辐射曲线也可以称之为光谱分布。具体而言,高光谱图像相较于普通图像增加了光谱维度,因此光谱维度是高光谱图像相对于普通图像特有的特征信息,该特征信息可以用光谱辐射曲线进行描述。在实际应用中,可以选择一定范围内的像素点(例如50×50的像素点)进行采样,把这个采样范围内的像素点的光谱辐射曲线均值作为这个范围内的光谱辐射曲线的代表,以便于在后续步骤中与预设的光谱辐射曲线进行对比,判断二者是否匹配。高光谱特征信息除了用光谱辐射曲线描述之外,还可以用高光谱图像的光谱分布进行描述。
电子设备获取第一材料的图像的高光谱特征信息以及目标图像的高光谱特征信息,该高光谱特征信息包括在不同波段下的平均光谱反射率、光谱均值、光谱矩阵、辐射值、分辨率等信息;该目标图像包括检测区域,该检测区域即为待检测图像区域。举例来说,第一材料可以为真实车牌材料,目标图像可以是包含了车牌图像区域的目标车辆图像。
在其中一个实现方式中,电子设备获取第一材料的图像的高光谱数据,其中,高光谱数据可以包括在不同波段下的平均光谱反射率、光谱均值、光谱矩阵、平均光谱辐照度、分辨率等信息,根据第一材料的图像的高光谱数据,选取第一材料的图像的像素点,选取的像素点的光谱均值(或者称为光谱曲线的均值)组成第一光谱矩阵(光谱均值曲线可以看成一个列矢量,不同种类目标的列矢量堆起来,就成为光谱矩阵),该第一光谱矩阵即为第一材料的图像的高光谱特征信息,该第一光谱矩阵可以用于后续的检测,也即是说,电子设备可以首先获取第一材料的图像的高光谱数据,然后处理该高光谱数据以获取第一材料的图像的高光谱特征信息。
在其中一个实现方式中,电子设备还可以获取第二材料的图像的高光谱数据,即电子 设备获取第一材料和第二材料的图像的高光谱数据,其中,高光谱数据可以包括在不同波段下的平均光谱反射率、平均光谱辐照度、分辨率等信息,根据第一材料的图像的高光谱数据,选取第一材料的图像的像素点,选取的像素点的光谱均值组成第一光谱矩阵,该第一光谱矩阵即为第一材料的图像的高光谱特征信息;根据第二材料的图像的高光谱数据,分别选取第二材料的图像中的像素点,将选取的像素点的光谱均值组成第二光谱矩阵,该第二光谱矩阵即为第二材料的图像的高光谱特征信息,也即是说,电子设备可以首先获取第一材料和第二材料的图像的高光谱数据,然后处理该高光谱数据以获取第一材料和第二材料的图像的高光谱特征信息。举例来说,第一材料为真实车牌材料,例如:真实车牌由蓝色底牌和五个白色号牌组成,选取该真实车牌材料的蓝色底牌的多个像素点的光谱均值,以及五个白色号牌的多个像素点的光谱均值,组成第一光谱矩阵,第二材料可以为金属、塑料和胶带,这些材料可以作为遮挡真实车牌或伪造真实车牌的材料,选取金属材料的多个像素点的光谱均值组成金属光谱矩阵,选取塑料材料的多个像素点的光谱均值组成塑料光谱矩阵,选取胶带材料的多个像素点的光谱均值组成胶带光谱矩阵,即第二光谱矩阵包括金属光谱矩阵、塑料光谱矩阵和胶带光谱矩阵。
步骤S602:检测该检测区域中是否存在目标区域。
具体地,电子设备获取第一材料的图像的高光谱特征信息以及目标图像的高光谱特征信息后,利用第一材料的图像的高光谱特征信息,检测目标图像中的检测区域中是否存在目标区域,该目标区域包括与第一材料的图像的高光谱特征信息不匹配的区域,也即是说,检测该检测区域若与第一材料的图像的高光谱特征信息匹配,则不存在目标区域。举例来说,检测区域为车牌图像区域,第一材料为真实车牌材料,将车牌图像区域的高光谱数据与第一材料的图像的高光谱特征信息作分析匹配,若车牌图像区域的高光谱数据中存在与真实车牌材料的图像的高光谱特征信息不匹配的区域,则该区域为目标区域(非真实车牌区域)。
举例来说,将检测区域的高光谱数据在真实车牌材料的第一光谱矩阵上进行正交投影,以得到丰度估计检测结果,该丰度估计检测结果描述了检测区域中每个像素点为真实车牌材料的概率,根据该丰度估计检测结果将概率不满足预设条件的像素点判定为目标像素点,举例来说,丰度估计检测结果描述了车牌图像区域中每个像素点的高光谱数据与第一材料(真实车牌材料)匹配的概率P1,预设条件为P1大于0.9,即若得出与真实车牌材料匹配的概率P1小于或等于0.9,则该像素点为目标像素点,即与真实车牌材料不匹配。
在其中一个实现方式中,电子设备获取第一材料和第二材料的图像的高光谱特征信息以及目标图像的高光谱特征信息后,利用第一材料和第二材料的图像的高光谱特征信息,检测目标图像中检测区域是否存在目标区域,该目标区域包括与第一材料的图像的高光谱特征信息不匹配的区域,也即是说,检测该检测区域若与第一材料的图像的高光谱特征信息匹配,则不存在目标区域,可以理解的,该目标区域还可以包括与第二材料的图像的高光谱特征信息匹配的区域。举例来说,检测区域为车牌图像区域,第一材料为真实车牌材料,第二材料可以为遮挡真实车牌或伪造真实车牌的材料,将车牌图像区域的高光谱数据与第一材料和第二材料的图像的高光谱特征信息作分析匹配,若车牌图像区域的高光谱数据中存在与真实车牌材料的图像的高光谱特征信息不匹配的区域,而与第二材料的图像的高光谱特征信息匹配的区域,则该区域为目标区域(非真实车牌区域)。
举例来说,将第一材料的第一光谱矩阵和第二材料的第二光谱矩阵共同组成端元矩阵,具体来说,每种材料的光谱信息可以表示成一个d维度的列向量,d表示光谱通道数,将不同材料的光谱向量进行横向拼接,以共同组成端元矩阵,例如金属光谱矩阵(这边叫做矩阵的原因是金属材料可以包括多种,例如铝、铁、合金等材料)为M1,塑料光谱矩阵为M2,胶带等高分子材料光谱矩阵为M3,真实车牌材料的第一光谱矩阵为Mp,则共同组成的端元矩阵为M=[Mp M1 M2 M3],通过该端元矩阵M可以检测出待检测车牌为真实车牌还是被金属、塑料或胶带材料所遮挡或伪造的假车牌;将检测区域的高光谱数据在该端元矩阵(真实车牌材料的第一光谱矩阵和金属光谱矩阵、塑料光谱矩阵、胶带光谱矩阵共同组成)上进行正交投影,以得到丰度估计检测结果,该丰度估计检测结果描述了检测区域中每个像素点为第一材料的概率和/或为第二材料的概率,根据该丰度估计检测结果将概率不满足预设条件的像素点判定为目标像素点,举例来说,丰度估计检测结果描述了车牌图像区域中每个像素点的高光谱数据与第一材料(真实车牌材料)匹配的概率P1以及与第二材料(非真实车牌材料)匹配的概率P2,预设条件为P1大于0.8且P1大于P2,即若得出与真实车牌材料匹配的概率P1小于或等于0.8或P1小于或等于与非真实车牌材料匹配的概率P2,则该像素点为目标像素点,即与真实车牌材料不匹配。
下面具体介绍三种根据目标像素点确定目标区域的方式:
方式一,若目标像素点覆盖的区域面积大于第二阈值,则检测出检测区域中存在目标区域。举例来说,第二阈值为50平方厘米,检测车牌图像区域的所有像素点,若目标像素点覆盖的区域面积大于50平方厘米,则判定在该车牌图像区域中存在目标区域,该目标区域为所有目标像素点覆盖的区域范围,这种方式避免了车牌中遮挡部分很小且不影响识别白色号牌,从而导致判断为伪造车牌的情况,提高了检测的精确度。
方式二,若目标像素点覆盖的区域范围与检测区域中字符区域范围的重叠面积大于第三阈值,则检测出检测区域中存在目标区域。举例来说,第三阈值为15平方厘米,检测车牌图像区域的所有像素点,若目标像素点覆盖的区域范围与车牌图像区域的白色号牌区域的重叠面积大于15平方厘米,则判定在该车牌图像区域中存在目标区域,该目标区域为所有目标像素点覆盖的区域范围,这种方式避免了车牌中的遮挡物只遮挡了车牌的蓝色底部而没有遮挡白色号牌(不影响识别白色号牌),从而导致判断为伪造车牌的情况,提高了检测的精确度。
方式三,若目标像素点覆盖的区域面积超过某个范围,则检测出检测区域中存在目标区域。举例来说,该范围可以为检测区域的5%,检测车牌图像区域的所有像素点,若目标像素点覆盖的区域面积占车牌图像区域的总面积的比例超过5%,则判定在该车牌图像区域中存在目标区域,该目标区域为所有目标像素点覆盖的区域范围。
如图5所示,检测区域包括目标车辆1的车辆原图、目标车辆2的车辆原图以及目标车辆3的车牌原图,通过端元矩阵对车牌原图的高光谱图像进行正交投影,再根据得到的丰度估计检测结果进行判定目标像素点,以确定目标区域,图5中的车牌检测结果即分别显示了当车牌为伪造金属、伪造胶带、伪造塑料时的检测结果。
步骤S603:生成结果。
具体地,检测该检测区域是否存在目标区域,当检测到存在目标区域,生成第一检测结果;当检测到不存在目标区域,生成第二检测结果;其中,第一检测结果可以描述检测 区域存在目标区域以及目标区域的具体位置,第二检测结果可以描述检测区域不存在目标区域。举例来说,当检测到待检测车牌为伪造车牌,即存在伪造区域,第一检测结果可以是如图5的车牌检测结果图,该车牌检测图描述了该车牌的伪造位置以及伪造的材料,即目标车辆1为金属伪造、目标车辆2为胶带伪造、目标车辆3为塑料伪造,而当检测到待检测车牌为真实车牌,即不存在伪造区域,第二检测结果可以是“不存在伪造区域”或“车牌为真实车牌”等文字或图像或语音描述。
本申请实施例通过该图像检测方法,首先获取第一材料的图像的高光谱特征信息以及目标图像的高光谱特征信息,该第一材料可以为真实车牌材料,然后利用获取的真实车牌材料的图像的高光谱特征信息检验检测区域中是否存在目标区域,即是检验待检测车牌图像中是否有与真实车牌材料的高光谱特征信息不匹配的区域,如果有,则生成第一检测结果,如果没有,则生成第二检测结果。由于高光谱图像中不同物质的光谱不同,利用真实车牌的高光谱数据,来对待检测车牌进行对比检测,避免了传统图像检测只能通过特征识别的弊端,利用高光谱图像中不同物质光谱不同的优势,有效区分了真实车牌和伪造车牌。
在其中一个实现方式中,本申请实施例还包括在检测该检测区域中是否存在目标区域之前,获取目标图像中的检测区域的具体实现步骤,该步骤可以包括:
步骤S604:获取目标图像的高光谱图像。
具体地,电子设备获取目标图像的高光谱图像,该目标图像的高光谱图像包括检测区域,其中,电子设备可以从具有高光谱技术的摄像设备中获取目标图像的高光谱图像,也可以使用自身的具有高光谱技术的摄像设备直接获取目标图像的高光谱图像。举例来说,高光谱摄像设备拍摄到目标车辆的高光谱图像,若该高光谱图像中的目标对象图像区域清晰完整,则输出到电子设备上,电子设备获取到目标图像的高光谱图像。举例来说,如图3所示,示例性的,图3为目标图像的高光谱图像,目标图像中包括目标车辆1、目标车辆2和目标车辆3,以及目标车辆的车牌区域,在高光谱图像中,车牌区域为清晰完整的区域。
步骤S605:根据该目标图像的高光谱图像合成目标图像的RGB图像。
具体地,电子设备获取目标图像的高光谱图像后,提取高光谱图像中700nm、546nm以及435nm附近的波段合成RGB图像,该700nm、546nm以及435nm附近的波段分别为颜色为红绿蓝的波段,将高光谱图像合成为RGB图像便于进行图像分析,由于高光谱图像背景数据复杂、数据量庞大、光谱库构建难度高,这种方式可以在确定检测区域的过程中更简便高效。
步骤S606:提取该RGB图像中检测区域。
具体地,电子设备合成目标图像的RGB图像后,提取该RGB图像中检测区域。下面介绍两种提取该RGB图像中检测区域的方式:
方式一,通过将RGB图像输入到车牌检测算法中直接定位获得检测区域,即车牌图像区域。
方式二,可以先选取RGB图像中符合预设目标对象颜色范围的像素点为特定像素点,然后根据特定像素点确定检测区域。
具体地,电子设备根据该目标图像的高光谱数据合成RGB图像后,将RGB图像转换到HSV色彩空间,遍历RGB图像中的像素点,若所述像素点的色调H、饱和度S和亮度V在预设目标对象颜色范围内,则选取所述像素点为特定像素点,其中,预设目标对象颜色 范围为真实车牌的蓝色像素点的颜色范围,真实车牌的蓝色像素点的色调范围为0.56~0.71,饱和度范围为0.4~1,亮度范围为0.3~1,若像素点的色调H、饱和度S和亮度V均在该真实车牌的蓝色像素点的颜色范围内,则可以认为该像素点为真实车牌的蓝色像素点,即选取为特定像素点。选取到特定像素点后,其中特定像素点覆盖的范围包括检测区域,根据特定像素点确定检测区域,即所有特定像素点组成的区域为检测区域。
在其中一个实施方式中,电子设备通过遍历RGB图像中的所有像素点,选取到特定像素点,验证特定像素点所组成的各个连通区域是否符合预设矩形规格,若是,则确定该符合预设矩形规格的连通区域为检测区域。举例来说,首先根据特定像素点组成的所有连通区域确定连通区域的外接矩形,根据车牌的特定几何特征验证所有连通区域的外接矩形,根据国家标准,小汽车车牌的长宽比值为440:140,根据此值对所有连通区域的外接矩形进行比较,确定符合条件的矩形区域,并输出该矩形区域的坐标,根据坐标定位到目标图像的高光谱图像中,以提取待检测车牌(即检测区域),如图5中的目标车辆1的车牌原图“辽B 65PF7”、目标车辆2的车牌原图“粤D 6DAF7”以及目标车辆3的车牌原图“辽B 9D243”即为检测区域。这种方式通过对目标图像的RGB图像的像素点进行遍历选取特定像素点,获取由特定像素点构成的检测区域,能够有效提取RGB图像中检测区域。
在图6所描述的方法中,在检测该检测区域中是否存在目标区域之前,首先获取包含检测区域的目标图像,且该目标图像为高光谱图像,通过对目标图像的RGB图像的像素点进行遍历选取特定像素点,获取由特定像素点构成的检测区域,由于高光谱图像背景数据复杂、数据量庞大、光谱库构建难度高,这种方式通过先获取目标图像的高光谱图像,然后获取RGB图像以确定目标图像中检测区域,利用RGB图像在车牌定位上简便高效的优势,以及高光谱图像中不同物质光谱不同的优势提高了该方法的简便性和实用性。
上述主要从电子设备实施的方法的角度对本申请实施例提供的方案进行了介绍。可以理解的是,各个网元,例如电子设备、摄像设备等为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本文中所公开的实施例描述的各示例的网元及算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本申请实施例可以根据上述方法示例对电子设备、摄像设备等进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。需要说明的是,本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
在采用对应各个功能划分各个功能模块的情况下,请参见图7,图7是本申请实施例提供的一种图像检测装置70的结构示意图,该图像检测装置70可以包括第一获取单元701、第一检测单元702和第一生成单元703,其中,各个单元的详细描述如下:
第一获取单元701,用于获取目标图像,所述目标图像为根据特定波段进行补光而拍摄得到的图像,所述目标图像中包括检测区域;其中,第一材料和第二材料在所述特定波段 中的光谱反射率的差值大于第一阈值;
第一检测单元702,根据所述检测区域的目标像素点,检测所述检测区域中是否存在目标区域;所述目标像素点包括:灰度值不在第一预设范围内的像素点,或者灰度值在第二预设范围内的像素点,所述第一预设范围包括所述第一材料的灰度值范围,所述第二预设范围包括所述第二材料的灰度值范围;
第一生成单元703,用于当检测到存在所述目标区域,生成第一检测结果;当检测到不存在所述目标区域,生成第二检测结果。
在一种可能的实现方式中,所述图像检测装置还可以包括采集单元704,其中,采集单元704,用于所述获取目标图像之前,通过高光谱摄像机采集所述第一材料的高光谱数据和所述第二材料的高光谱数据;确定单元,用于根据所述第一材料的高光谱数据和所述第二材料的高光谱数据确定所述特定波段的范围。
在一种可能的实现方式中,所述特定波段的范围包括550nm~700nm。
在一种可能的实现方式中,所述第一检测单元702具体包括:提取单元705,用于从所述目标图像中提取检测区域;所述第一检测单元702,还用于若所述检测区域的目标像素点覆盖的范围大于第二阈值,则检测出所述检测区域中存在目标区域;或者若所述检测区域的目标像素点覆盖的区域范围与所述检测区域中字符区域范围的重叠面积大于第三阈值,则检测出所述检测区域中存在目标区域。
需要说明的是,各个单元的实现还可以对应参照图2所示的方法实施例的相应描述,此处不再赘述。
请参见图8,图8是本申请实施例提供的另一种图像检测装置80的结构示意图,该图像检测装置80可以包括第二获取单元801、第二检测单元802和第二生成单元803,其中,各个单元的详细描述如下:
第二获取单元801,用于获取第一材料的图像的高光谱特征信息,以及获取目标图像的高光谱特征信息,所述高光谱特征信息描述了图像的光谱信息,所述目标图像中包括检测区域;
第二检测单元802,用于利用所述第一材料的图像的高光谱特征信息检测所述检测区域中是否存在目标区域,所述目标区域中包括:与所述第一材料的高光谱特征信息不匹配的区域;
第二生成单元803,用于当检测到存在所述目标区域,生成第一检测结果;当检测到不存在所述目标区域,生成第二检测结果。
在一种可能的实现方式中,所述图像检测装置80还包括:所述第二获取单元801,还用于在所述利用所述第一材料的图像的高光谱特征信息检测所述检测区域中是否存在目标区域之前,获取第二材料的图像的高光谱特征信息;所述第二检测单元802,还用于利用所述第一材料和所述第二材料的图像的高光谱特征信息检测所述检测区域中是否存在目标区域。
在一种可能的实现方式中,所述装置还包括:组成单元804,用于根据所述第一材料的图像的高光谱数据,选取所述第一材料的图像的目标像素点的光谱均值,组成第一光谱矩阵,所述第一材料的图像的高光谱特征信息包括所述第一光谱矩阵;所述第二检测单元802包括:正交投影单元805,用于将所述检测区域的高光谱数据在所述第一光谱矩阵上进行正 交投影,以得到丰度估计检测结果,所述丰度估计检测结果包括所述检测区域中每个像素点为所述第一材料的概率的描述;所述第二检测单元802,还用于若目标像素点覆盖的面积大于第二阈值,则检测出所述检测区域中存在目标区域;或者,若所述目标像素点覆盖的区域范围与所述检测区域中字符区域范围的重叠面积大于第三阈值,则检测出所述检测区域中存在目标区域;所述目标像素点包括所述丰度估计检测结果中所述概率符合预设条件的像素点。
在一种可能的实现方式中,所述装置还包括:所述组成单元804,还用于根据所述第一材料的图像的高光谱数据,选取所述第一材料的图像中的目标像素点的光谱均值,组成第一光谱矩阵,所述第一材料的图像的高光谱特征信息包括所述第一光谱矩阵;所述组成单元804,还用于根据所述第二材料的图像的高光谱数据,选取所述第二材料的图像中的目标像素点的光谱均值,组成第二光谱矩阵,所述第二材料的图像的高光谱特征信息包括所述第二光谱矩阵;所述第二检测单元802包括:所述组成单元804,还用于将所述第一光谱矩阵和所述第二光谱矩阵共同组成端元矩阵;所述正交投影单元805,还用于将所述检测区域的高光谱数据在所述端元矩阵上进行正交投影,以得到丰度估计检测结果,所述丰度估计检测结果包括所述检测区域中每个像素点为所述第一材料的概率的描述;所述第二检测单元802,还用于若目标像素点覆盖的面积大于第二阈值,则检测出所述检测区域中存在目标区域;或者,若所述目标像素点覆盖的区域范围与所述检测区域中字符区域范围的重叠面积大于第三阈值,则检测出所述检测区域中存在目标区域;所述目标像素点包括所述丰度估计检测结果中所述概率符合预设条件的像素点。
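下面给出上述正交投影丰度估计的一个示意性代码草图(基于最小二乘的假设性实现),同时覆盖仅使用第一光谱矩阵,以及由第一、第二光谱矩阵共同组成端元矩阵这两种情形;其中数据的数组组织方式与函数名estimate_abundance均为便于说明而引入的假设。

```python
import numpy as np

def estimate_abundance(roi_cube, first_spectrum, second_spectrum=None):
    """用最小二乘(正交投影到端元矩阵列空间)做丰度估计的示意性实现。
    roi_cube: 检测区域高光谱数据(H, W, B);first_spectrum: 第一材料目标像素点的光谱均值。
    返回每个像素属于第一材料的丰度,可视作该像素为第一材料的概率的描述。"""
    H, W, B = roi_cube.shape
    X = roi_cube.reshape(-1, B).T                        # (B, H*W):每列是一个像素的光谱
    endmembers = [np.asarray(first_spectrum, dtype=np.float64)]
    if second_spectrum is not None:                      # 存在第二材料时,共同组成端元矩阵
        endmembers.append(np.asarray(second_spectrum, dtype=np.float64))
    M = np.stack(endmembers, axis=1)                     # 端元矩阵 (B, 端元数)
    # 最小二乘解等价于把像素光谱正交投影到端元矩阵张成的子空间上
    A, *_ = np.linalg.lstsq(M, X, rcond=None)
    first_abundance = A[0].reshape(H, W)                 # 第一材料的丰度图
    return first_abundance
```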
需要说明的是,各个单元的实现还可以对应参照图6所示的方法实施例的相应描述,此处不再赘述。
图9为本申请实施例提供的上述实施例中所涉及的电子设备的一种可能的硬件结构示意图。如图9所示,电子设备900可包括:一个或多个处理器901、一个或多个存储器902以及一个或多个通信接口903。这些部件可通过总线904或者其他方式连接,图9以通过总线连接为例。其中:
通信接口903可用于电子设备900与其他通信设备,例如其他电子设备,进行通信。具体的,通信接口903可以是有线接口。
存储器902可以和处理器901通过总线904或者输入输出端口耦合,存储器902也可以与处理器901集成在一起。存储器902用于存储各种软件程序和/或多组指令或者数据。具体的,存储器902可以是只读存储器(read-only memory,ROM)或可存储静态信息和指令的其他类型的静态存储设备,随机存取存储器(random access memory,RAM)或者可存储信息和指令的其他类型的动态存储设备,也可以是电可擦可编程只读存储器(Electrically Erasable Programmable Read-Only Memory,EEPROM)、只读光盘(Compact Disc Read-Only Memory,CD-ROM)或其他光盘存储、光碟存储(包括压缩光碟、激光碟、光碟、数字通用光碟、蓝光光碟等)、磁盘存储介质或者其他磁存储设备、或者能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质,但不限于此。存储器902可包括高速随机存取的存储器,并且也可包括非易失性存储器,例如一个或多个磁盘存储设备、闪存设备或其他非易失性固态存储设备。存储器902可以存储操作系统(下述简称系统),例如uCOS、VxWorks、RTLinux等嵌入式操作系统。存储器902还可以存储网络通信程序,该网络通信程序可用于与一个或多个附加设备、一个或多个用户设备、一个或多个电子设备进行通信。存储器902可以独立存在,通过总线904与处理器901相连接;存储器902也可以和处理器901集成在一起。
其中,所述存储器902用于存储执行以上方案的应用程序代码,并由处理器901来控制执行。所述处理器901用于执行所述存储器902中存储的应用程序代码。
处理器901可以是中央处理器单元,通用处理器,数字信号处理器,专用集成电路,现场可编程门阵列或者其他可编程逻辑器件、晶体管逻辑器件、硬件部件或者其任意组合。其可以实现或执行结合本申请公开内容所描述的各种示例性的逻辑方框,模块和电路。所述处理器也可以是实现确定功能的组合,例如包含一个或多个微处理器组合,数字信号处理器和微处理器的组合等等。
本申请实施例中,处理器901可用于读取和执行计算机可读指令。具体的,处理器901可用于调用存储于存储器902中的程序,例如本申请的一个或多个实施例提供的图像检测方法在电子设备900侧的实现程序,并执行该程序包含的指令。
可以理解的,电子设备900可以是图1示出的图像检测方法的系统中的电子设备103,可实施为一个基本服务集(BSS)、一个扩展服务集(ESS)、移动手机或电脑终端等等。
需要说明的是,图9所示的电子设备900仅仅是本申请实施例的一种实现方式,实际应用中,电子设备900还可以包括更多或更少的部件,这里不作限制。关于电子设备900的具体实现可以参考前述图2或图6所示方法实施例中的相关描述,此处不再赘述。
参见图10,图10示出了本申请提供的一种摄像机的结构示意图。如图10所示,该摄像机1000包括补光灯1001和摄像模块1002,其中,所述补光灯1001用于产生特定波段的补偿光,其中,第一材料和第二材料在所述特定波段下的光谱反射率的差值大于第一阈值;所述摄像模块1002用于基于所述特定波段拍摄并获取目标图像。该摄像机还可以包括存储器,存储器用于与处理器耦合,其保存该摄像机必要的程序指令和数据。该摄像机还可以包括通信接口,用于与其他设备或通信网络通信。
参见图11,图11示出了本申请提供的一种可能的装置的结构示意图。如图11所示,装置1100可包括:处理器1101、总线1103以及耦合于处理器1101的一个或多个接口1102。其中:
处理器1101可用于读取和执行计算机可读指令。具体实现中,处理器1101可主要包括控制器、运算器和寄存器。其中,控制器主要负责指令译码,并为指令对应的操作发出控制信号。运算器主要负责执行定点或浮点算数运算操作、移位操作以及逻辑操作等,也可以执行地址运算和转换。寄存器主要负责保存指令执行过程中临时存放的寄存器操作数和中间操作结果等。具体实现中,处理器1101的硬件架构可以是专用集成电路(application specific integrated circuits,ASIC)架构、无互锁管道阶段架构的微处理器(microprocessor without interlocked piped stages architecture,MIPS)架构、进阶精简指令集机器(advanced RISC machines,ARM)架构或者NP架构等等。处理器1101可以是单核的,也可以是多核的。
接口1102可用于输入待处理的数据至处理器1101,并且可以向外输出处理器1101的处理结果。具体实现中,接口1102可以是通用输入输出(general purpose input output,GPIO)接口,可以和多个外围设备(如显示器(LCD)、摄像头(camera)等等)连接。接口1102可以通过总线1103与处理器1101相连。其中,显示器和摄像头可以和处理器1101集成在一起,这种情况下,显示器和摄像头是作为装置1100的一部分。
本申请中,处理器1101可用于从存储器中调用本申请的一个或多个实施例提供的图像检测方法在相应设备侧的实现程序,并执行该程序包含的指令。存储器可以和处理器1101集成在一起,这种情况下,存储器是作为装置1100的一部分。或者,存储器作为装置1100外部的元件,处理器1101通过接口1102调用存储器中存储的指令或数据。
接口1102可用于输出处理器1101的执行结果。关于本申请的一个或多个实施例提供的图像检测方法可参考前述各个实施例,这里不再赘述。
上述装置1100可以是通信芯片或者系统芯片(System on a Chip,SoC)。
需要说明的是,处理器1101、接口1102各自对应的功能既可以通过硬件设计实现,也可以通过软件设计来实现,还可以通过软硬件结合的方式来实现,这里不作限制。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本申请并不受所描述的动作顺序的限制,因为依据本申请,某些步骤可能可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本申请所必须的。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置,可通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如上述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性或其它的形式。
上述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
上述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以为个人计算机、服务端或者网络设备等,具体可以是计算机设备中的处理器)执行本申请各个实施例上述方法的全部或部分步骤。其中,前述的存储介质可包括:U盘、移动硬盘、磁碟、光盘、只读存储器(Read-Only Memory,缩写:ROM)或者随机存取存储器(Random Access Memory,缩写:RAM)等各种可以存储程序代码的介质。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,该流程可以由计算机程序来指令相关的硬件完成,该程序可存储于计算机可读取存储介质中,该程序在执行时,可包括如上述各方法实施例的流程。而前述的存储介质包括:ROM或随机存储记忆体RAM、磁碟或者光盘等各种可存储程序代码的介质。

Claims (20)

  1. 一种图像检测方法,其特征在于,包括:
    获取目标图像,所述目标图像为根据特定波段进行补光而拍摄得到的图像,所述目标图像中包括检测区域;其中,第一材料和第二材料在所述特定波段中的光谱反射率的差值大于第一阈值;
    根据所述检测区域中的目标像素点,检测所述检测区域中是否存在目标区域;所述目标像素点包括:灰度值不在第一预设范围内的像素点,或者灰度值在第二预设范围内的像素点,所述第一预设范围包括所述第一材料的灰度值范围,所述第二预设范围包括所述第二材料的灰度值范围;
    当检测到存在所述目标区域,生成第一检测结果;当检测到不存在所述目标区域,生成第二检测结果。
  2. 根据权利要求1所述的方法,其特征在于,所述获取目标图像之前,还包括:
    通过高光谱摄像机采集所述第一材料的高光谱数据和所述第二材料的高光谱数据;
    根据所述第一材料的高光谱数据和所述第二材料的高光谱数据确定所述特定波段的范围。
  3. 根据权利要求1所述的方法,其特征在于,所述特定波段范围包括550nm~700nm。
  4. 根据权利要求1所述的方法,其特征在于,所述根据所述检测区域中的目标像素点,检测所述检测区域中是否存在目标区域包括:
    从所述目标图像中提取检测区域;
    若所述检测区域的目标像素点覆盖的范围大于第二阈值,则检测出所述检测区域中存在目标区域;或者
    若所述检测区域的目标像素点覆盖的区域范围与所述检测区域中字符区域范围的重叠面积大于第三阈值，则检测出所述检测区域中存在目标区域。
  5. 一种图像检测方法,其特征在于,包括:
    获取第一材料的图像的高光谱特征信息,以及获取目标图像的高光谱特征信息,所述目标图像中包括检测区域;
    利用所述第一材料的图像的高光谱特征信息检测所述检测区域中是否存在目标区域,所述目标区域中包括:与所述第一材料的高光谱特征信息不匹配的区域;
    当检测到存在所述目标区域,生成第一检测结果;当检测到不存在所述目标区域,生成第二检测结果。
  6. 根据权利要求5所述的方法,其特征在于,所述利用所述第一材料的图像的高光谱特征信息检测所述检测区域中是否存在目标区域之前,还包括:获取第二材料的图像的高光谱特征信息;
    利用所述第一材料的图像的高光谱特征信息检测所述检测区域中是否存在目标区域，包括：
    利用所述第一材料的图像的高光谱特征信息检测所述检测区域中是否存在目标区域;
    以及利用所述第二材料的图像的高光谱特征信息检测所述检测区域中是否存在目标区域。
  7. 根据权利要求5所述的方法,其特征在于,所述获取第一材料的图像的高光谱特征信息包括:
    根据所述第一材料的图像的高光谱数据,选取所述第一材料的图像的目标像素点的光谱均值,组成第一光谱矩阵,所述第一材料的图像的高光谱特征信息包括所述第一光谱矩阵;
    所述利用所述第一材料的图像的高光谱特征信息检测所述检测区域中是否存在目标区域包括:将所述检测区域的高光谱数据在所述第一光谱矩阵上进行正交投影,以得到丰度估计检测结果,所述丰度估计检测结果包括所述检测区域中每个像素点为所述第一材料的概率的描述;
    若目标像素点覆盖的面积大于第二阈值,则检测出所述检测区域中存在目标区域;或者,若所述目标像素点覆盖的区域范围与所述检测区域中字符区域范围的重叠面积大于第三阈值,则检测出所述检测区域中存在目标区域;所述目标像素点包括所述丰度估计检测结果中所述概率符合预设条件的像素点。
  8. 根据权利要求6所述的方法,其特征在于,所述获取第一材料的图像的高光谱特征信息包括:
    根据所述第一材料的图像的高光谱数据,选取所述第一材料的图像中的目标像素点的光谱均值,组成第一光谱矩阵,所述第一材料的图像的高光谱特征信息包括所述第一光谱矩阵;
    所述获取第二材料的图像的高光谱特征信息包括:
    根据所述第二材料的图像的高光谱数据,选取所述第二材料的图像中的目标像素点的光谱均值,组成第二光谱矩阵,所述第二材料的图像的高光谱特征信息包括所述第二光谱矩阵;
    所述利用所述第一材料和所述第二材料的图像的高光谱特征信息检测所述检测区域中是否存在目标区域包括:
    将所述第一光谱矩阵和所述第二光谱矩阵共同组成端元矩阵;
    将所述检测区域的高光谱数据在所述端元矩阵上进行正交投影,以得到丰度估计检测结果,所述丰度估计检测结果包括所述检测区域中每个像素点为所述第一材料的概率的描述;
    若目标像素点覆盖的面积大于第二阈值,则检测出所述检测区域中存在目标区域;或者,若所述目标像素点覆盖的区域范围与所述检测区域中字符区域范围的重叠面积大于第三阈值,则检测出所述检测区域中存在目标区域;所述目标像素点包括所述丰度估计检测结果中所述概率符合预设条件的像素点。
  9. 一种图像检测装置,其特征在于,包括:
    第一获取单元,用于获取目标图像,所述目标图像为根据特定波段进行补光而拍摄得到的图像,所述目标图像中包括检测区域;其中,第一材料和第二材料在所述特定波段中的光谱反射率的差值大于第一阈值;
    第一检测单元,根据所述检测区域中的目标像素点,检测所述检测区域中是否存在目标区域;所述目标像素点包括:灰度值不在第一预设范围内的像素点,或者灰度值在第二预设范围内的像素点,所述第一预设范围包括所述第一材料的灰度值范围,所述第二预设范围包括所述第二材料的灰度值范围;
    第一生成单元,用于当检测到存在所述目标区域,生成第一检测结果;当检测到不存在所述目标区域,生成第二检测结果。
  10. 根据权利要求9所述的装置,其特征在于,所述装置还包括:
    采集单元,用于所述获取目标图像之前,通过高光谱摄像机采集所述第一材料的高光谱数据和所述第二材料的高光谱数据;
    确定单元,用于根据所述第一材料的高光谱数据和所述第二材料的高光谱数据确定所述特定波段的范围。
  11. 根据权利要求9所述的装置,其特征在于,所述特定波段范围包括550nm~700nm。
  12. 根据权利要求9所述的装置,其特征在于,所述第一检测单元具体包括:
    提取单元,用于从所述目标图像中提取检测区域;
    所述第一检测单元,还用于若所述检测区域的目标像素点覆盖的面积大于第二阈值,则检测出所述检测区域中存在目标区域;或者若所述检测区域的目标像素点覆盖的区域范围与所述检测区域中字符区域范围的重叠面积大于第三阈值,则检测出所述检测区域中存在目标区域。
  13. 一种图像检测装置,其特征在于,包括:
    第二获取单元,用于获取第一材料的图像的高光谱特征信息,以及获取目标图像的高光谱特征信息,所述高光谱特征信息描述了图像的光谱信息,所述目标图像中包括检测区域;
    第二检测单元,用于利用所述第一材料的图像的高光谱特征信息检测所述目标图像中检测区域中是否存在目标区域,所述目标区域中包括:与所述第一材料的高光谱特征信息不匹配的区域;
    第二生成单元,用于当检测到存在所述目标区域,生成第一检测结果;当检测到不存在所述目标区域,生成第二检测结果。
  14. 根据权利要求13所述的装置,其特征在于,所述装置还包括:
    所述第二获取单元,还用于在所述利用所述第一材料的图像的高光谱特征信息检测所述检测区域中是否存在目标区域之前,获取第二材料的图像的高光谱特征信息;
    所述第二检测单元,还用于利用所述第一材料和所述第二材料的图像的高光谱特征信息检测所述检测区域中是否存在目标区域。
  15. 根据权利要求13所述的装置,其特征在于,所述装置还包括:
    组成单元,用于根据所述第一材料的图像的高光谱数据,选取所述第一材料的图像的目标像素点的光谱均值,组成第一光谱矩阵,所述第一材料的图像的高光谱特征信息包括所述第一光谱矩阵;
    所述第二检测单元包括:
    正交投影单元,用于将所述检测区域的高光谱数据在所述第一光谱矩阵上进行正交投影,以得到丰度估计检测结果,所述丰度估计检测结果包括所述检测区域中每个像素点为所述第一材料的概率的描述;
    所述第二检测单元,还用于若目标像素点覆盖的面积大于第二阈值,则检测出所述检测区域中存在目标区域;或者,若所述目标像素点覆盖的区域范围与所述检测区域中字符区域范围的重叠面积大于第三阈值,则检测出所述检测区域中存在目标区域;所述目标像素点包括所述丰度估计检测结果中所述概率符合预设条件的像素点。
  16. 根据权利要求14所述的装置,其特征在于,所述装置还包括:
    所述组成单元,还用于根据所述第一材料的图像的高光谱数据,选取所述第一材料的图像中的目标像素点的光谱均值,组成第一光谱矩阵,所述第一材料的图像的高光谱特征信息包括所述第一光谱矩阵;
    所述组成单元,还用于根据所述第二材料的图像的高光谱数据,选取所述第二材料的图像中的目标像素点的光谱均值,组成第二光谱矩阵,所述第二材料的图像的高光谱特征信息包括所述第二光谱矩阵;
    所述第二检测单元包括:
    所述组成单元,还用于将所述第一光谱矩阵和所述第二光谱矩阵共同组成端元矩阵;
    所述正交投影单元,还用于将所述检测区域的高光谱数据在所述端元矩阵上进行正交投影,以得到丰度估计检测结果,所述丰度估计检测结果包括所述检测区域中每个像素点为所述第一材料的概率的描述;
    所述第二检测单元,还用于若目标像素点覆盖的面积大于第二阈值,则检测出所述检测区域中存在目标区域;或者,若所述目标像素点覆盖的区域范围与所述检测区域中字符区域范围的重叠面积大于第三阈值,则检测出所述检测区域中存在目标区域;所述目标像素点包括所述丰度估计检测结果中所述概率符合预设条件的像素点。
  17. 一种终端设备,其特征在于,包括处理器以及通信接口,其中,所述处理器用于调用存储的图像检测程序代码来执行权利要求1至10任一项所述的方法。
  18. 一种摄像机,其特征在于,包括补光灯和摄像模块,其中,所述补光灯用于产生特定波段的补偿光,其中,第一材料和第二材料在所述特定波段下的光谱反射率的差值大于第一阈值;所述摄像模块用于基于所述特定波段拍摄并获取目标图像。
  19. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储有计算机程序,该计算机程序被处理器执行时实现上述权利要求1-10任意一项所述的方法。
  20. 一种芯片系统,其特征在于,所述芯片系统包括至少一个处理器,存储器和接口电路,所述存储器、所述接口电路和所述至少一个处理器通过线路互联,所述至少一个存储器中存储有指令;所述指令被所述处理器执行时,权利要求1-10中任意一项所述的方法得以实现。
PCT/CN2020/083507 2019-08-23 2020-04-07 一种图像检测方法及相关设备 WO2021036267A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910788709.5A CN112417934B (zh) 2019-08-23 2019-08-23 一种图像检测方法及相关设备
CN201910788709.5 2019-08-23

Publications (1)

Publication Number Publication Date
WO2021036267A1 true WO2021036267A1 (zh) 2021-03-04

Family

ID=74685415

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/083507 WO2021036267A1 (zh) 2019-08-23 2020-04-07 一种图像检测方法及相关设备

Country Status (2)

Country Link
CN (1) CN112417934B (zh)
WO (1) WO2021036267A1 (zh)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112837538A (zh) * 2021-03-27 2021-05-25 深圳市迅朗科技有限公司 一种车牌云识别相机、图像传感器组件及补光、保洁方法
CN113222908A (zh) * 2021-04-23 2021-08-06 中国科学院西安光学精密机械研究所 基于自适应光谱波段筛选网络的高光谱遮挡效果评估方法
CN113609907A (zh) * 2021-07-01 2021-11-05 奥比中光科技集团股份有限公司 一种多光谱数据的获取方法、装置及设备
CN114166805A (zh) * 2021-11-03 2022-03-11 格力电器(合肥)有限公司 Ntc温度传感器检测方法、装置、ntc温度传感器及制造方法
CN114882100A (zh) * 2022-05-10 2022-08-09 北京师范大学 一种基于亚像元制图的土地覆盖面积估算方法及系统
US20230237686A1 (en) * 2022-01-23 2023-07-27 Nicholas Robert Spiker Automated color calibration system for optical devices
CN117746220A (zh) * 2023-12-18 2024-03-22 广东安快智能科技有限公司 智能道闸真伪车牌的识别检测方法、装置、设备以及介质
CN118135205A (zh) * 2024-05-06 2024-06-04 南京信息工程大学 一种高光谱图像异常检测方法
CN118470440A (zh) * 2024-07-10 2024-08-09 山东大学 一种基于深度学习与高光谱图像的肿瘤早期识别系统

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116246053A (zh) * 2022-04-08 2023-06-09 辽宁警察学院 基于两阶段连续拍照补光的车辆监控系统图像采集方法
CN115546837B (zh) * 2022-10-16 2023-06-23 三峡大学 变电站进出综合管理系统

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070036200A1 (en) * 2005-08-09 2007-02-15 United States Of America As Represented By The Dept Of The Army Variable emittance surfaces
CN101806898A (zh) * 2010-03-19 2010-08-18 武汉大学 基于端元可变的高光谱遥感影像目标探测方法
CN101807301A (zh) * 2010-03-17 2010-08-18 北京航空航天大学 一种基于高阶统计量的高光谱图像目标检测方法
CN102156981A (zh) * 2011-03-10 2011-08-17 北京航空航天大学 一种基于正则化高阶统计量的高光谱空间多目标检测方法
CN104881632A (zh) * 2015-04-28 2015-09-02 南京邮电大学 高光谱人脸识别方法
CN107402070A (zh) * 2017-06-02 2017-11-28 皑高森德医疗技术(北京)有限责任公司 一种皮肤高光谱图像采集单元及标定方法
CN108073895A (zh) * 2017-11-22 2018-05-25 杭州电子科技大学 一种基于解混预处理的高光谱目标检测方法
CN110363186A (zh) * 2019-08-20 2019-10-22 四川九洲电器集团有限责任公司 一种异常检测方法、装置及计算机存储介质、电子设备
CN111311696A (zh) * 2020-02-12 2020-06-19 大连海事大学 一种基于高光谱解混技术的车牌真伪检测方法

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070036200A1 (en) * 2005-08-09 2007-02-15 United States Of America As Represented By The Dept Of The Army Variable emittance surfaces
CN101807301A (zh) * 2010-03-17 2010-08-18 北京航空航天大学 一种基于高阶统计量的高光谱图像目标检测方法
CN101806898A (zh) * 2010-03-19 2010-08-18 武汉大学 基于端元可变的高光谱遥感影像目标探测方法
CN102156981A (zh) * 2011-03-10 2011-08-17 北京航空航天大学 一种基于正则化高阶统计量的高光谱空间多目标检测方法
CN104881632A (zh) * 2015-04-28 2015-09-02 南京邮电大学 高光谱人脸识别方法
CN107402070A (zh) * 2017-06-02 2017-11-28 皑高森德医疗技术(北京)有限责任公司 一种皮肤高光谱图像采集单元及标定方法
CN108073895A (zh) * 2017-11-22 2018-05-25 杭州电子科技大学 一种基于解混预处理的高光谱目标检测方法
CN110363186A (zh) * 2019-08-20 2019-10-22 四川九洲电器集团有限责任公司 一种异常检测方法、装置及计算机存储介质、电子设备
CN111311696A (zh) * 2020-02-12 2020-06-19 大连海事大学 一种基于高光谱解混技术的车牌真伪检测方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HE ZI-JIAN; SHI JIA-MING; WANG JIA-CHUN; ZHAO DA-PENG; WANG QI-CHAO; GAN GUI-HUA: "Recognition of Camouflaged Target by Hyperspectral Imaging System Based on Acousto- optic Tunable Filter", LASER & INFRARED, vol. 44, no. 7, 31 July 2014 (2014-07-31), XP009526360, ISSN: 1001-5078, DOI: 10.3969/j.issn.1001-5078.2014.07.019 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112837538A (zh) * 2021-03-27 2021-05-25 深圳市迅朗科技有限公司 一种车牌云识别相机、图像传感器组件及补光、保洁方法
CN112837538B (zh) * 2021-03-27 2023-12-22 深圳市迅朗科技有限公司 一种车牌云识别相机及补光方法
CN113222908B (zh) * 2021-04-23 2023-12-12 中国科学院西安光学精密机械研究所 基于自适应光谱波段筛选网络的高光谱遮挡效果评估方法
CN113222908A (zh) * 2021-04-23 2021-08-06 中国科学院西安光学精密机械研究所 基于自适应光谱波段筛选网络的高光谱遮挡效果评估方法
CN113609907A (zh) * 2021-07-01 2021-11-05 奥比中光科技集团股份有限公司 一种多光谱数据的获取方法、装置及设备
CN113609907B (zh) * 2021-07-01 2024-03-12 奥比中光科技集团股份有限公司 一种多光谱数据的获取方法、装置及设备
CN114166805A (zh) * 2021-11-03 2022-03-11 格力电器(合肥)有限公司 Ntc温度传感器检测方法、装置、ntc温度传感器及制造方法
CN114166805B (zh) * 2021-11-03 2024-01-30 格力电器(合肥)有限公司 Ntc温度传感器检测方法、装置、ntc温度传感器及制造方法
US20230237686A1 (en) * 2022-01-23 2023-07-27 Nicholas Robert Spiker Automated color calibration system for optical devices
US11893758B2 (en) * 2022-01-23 2024-02-06 Verichrome Automated color calibration system for optical devices
CN114882100A (zh) * 2022-05-10 2022-08-09 北京师范大学 一种基于亚像元制图的土地覆盖面积估算方法及系统
CN117746220A (zh) * 2023-12-18 2024-03-22 广东安快智能科技有限公司 智能道闸真伪车牌的识别检测方法、装置、设备以及介质
CN118135205A (zh) * 2024-05-06 2024-06-04 南京信息工程大学 一种高光谱图像异常检测方法
CN118470440A (zh) * 2024-07-10 2024-08-09 山东大学 一种基于深度学习与高光谱图像的肿瘤早期识别系统

Also Published As

Publication number Publication date
CN112417934B (zh) 2024-05-14
CN112417934A (zh) 2021-02-26

Similar Documents

Publication Publication Date Title
WO2021036267A1 (zh) 一种图像检测方法及相关设备
CN111161205B (zh) 一种图像处理和人脸图像识别方法、装置及设备
US7375745B2 (en) Method for digital image stitching and apparatus for performing the same
EP2833288B1 (en) Face calibration method and system, and computer storage medium
Ajmal et al. A comparison of RGB and HSV colour spaces for visual attention models
US9412164B2 (en) Apparatus and methods for imaging system calibration
US8526731B2 (en) Hough transform method for linear ribbon and circular ring detection in the gradient domain
Ren et al. Fusion of intensity and inter-component chromatic difference for effective and robust colour edge detection
US20150009226A1 (en) Color chart detection apparatus, color chart detection method, and color chart detection computer program
CN108154149B (zh) 基于深度学习网络共享的车牌识别方法
WO2018068304A1 (zh) 一种图像匹配的方法及装置
US11348248B1 (en) Automatic image cropping systems and methods
US9064178B2 (en) Edge detection apparatus, program and method for edge detection
WO2018147059A1 (ja) 画像処理装置、および画像処理方法、並びにプログラム
US11620759B2 (en) Systems and methods for machine learning enhanced image registration
CN108052976A (zh) 一种多波段图像融合识别方法
CN111539311A (zh) 基于ir和rgb双摄的活体判别方法、装置及系统
JP5201184B2 (ja) 画像処理装置及びプログラム
CN110909568A (zh) 用于面部识别的图像检测方法、装置、电子设备及介质
US11275952B2 (en) Monitoring method, apparatus and system, electronic device, and computer readable storage medium
CN106402717B (zh) 一种ar播放控制方法及智能台灯
CN112907593B (zh) 一种手机镜片胶体断层位置的识别方法、装置及相关设备
CN210627230U (zh) 一种人脸识别设备
US8538142B2 (en) Face-detection processing methods, image processing devices, and articles of manufacture
CN107633498A (zh) 图像暗态增强方法、装置及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20857428

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20857428

Country of ref document: EP

Kind code of ref document: A1