CN110335273B - Detection method, detection device, electronic apparatus, and medium - Google Patents

Detection method, detection device, electronic apparatus, and medium

Info

Publication number
CN110335273B
CN110335273B
Authority
CN
China
Prior art keywords
image
detected
region
connected domain
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910636959.7A
Other languages
Chinese (zh)
Other versions
CN110335273A (en)
Inventor
蔡禹丞
仇晓松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong Technology Information Technology Co Ltd
Original Assignee
Beijing Haiyi Tongzhan Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Haiyi Tongzhan Information Technology Co Ltd
Priority to CN201910636959.7A
Publication of CN110335273A
Application granted
Publication of CN110335273B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a detection method, a detection apparatus, an electronic device, and a medium. The detection method includes: acquiring an image, where the image includes an object to be detected; acquiring a to-be-detected region image that contains the object to be detected, the area of the region image being smaller than that of the whole image; performing connected domain analysis and shape analysis on the region image to determine the object position of the object to be detected in the image; and determining and outputting object state information at the object position.

Description

Detection method, detection device, electronic apparatus, and medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a detection method, a detection apparatus, an electronic device, and a medium.
Background
Electronic devices are increasingly used in many fields such as industrial and agricultural production, construction, logistics, and daily life. An electronic device generally has a plurality of status indicator lamps to indicate a current status of the electronic device, and the current status of each status indicator lamp may be determined by automatically detecting an image of the status indicator lamp of the electronic device.
In the course of implementing the disclosed concept, the inventors found at least the following problem in the prior art: when status indicator lamps are detected from images, a small screenshot of the region around each indicator lamp is first obtained by calibration or similar techniques, and state detection is then performed on that small screenshot; the indicator lamps cannot be located and detected directly on an image of the whole device, so detection efficiency is low.
Disclosure of Invention
In view of the above, the present disclosure provides a detection method, a detection apparatus, an electronic device, and a medium, which can directly perform indicator lamp positioning and detection on an entire device image.
One aspect of the present disclosure provides a detection method, which may include the following operations: first, obtaining an image including an object to be detected; then, obtaining a to-be-detected region image containing the object to be detected, where the area of the region image is smaller than the area of the image; next, performing connected domain analysis and shape analysis on the region image to determine the object position of the object to be detected in the image; and finally, determining and outputting object state information at the object position.
According to the detection method provided by the embodiment of the disclosure, the region image containing the object to be detected is first extracted from the image, and connected domain analysis and shape analysis are then performed on it. This overcomes influencing factors such as differing equipment appearances and lighting interference, so the object position of the object to be detected can be accurately determined and the object state information at that position can be output.
According to an embodiment of the present disclosure, acquiring the to-be-detected region image may include: first, converting the image into a first color space to acquire the luminance parameter of each pixel; then, acquiring a first region image in which the luminance parameter values of the pixels are all smaller than a first luminance threshold; then, converting the first region image into a second color space to acquire the color parameter of each pixel in it; and finally, acquiring from the first region image, based on those color parameter values, a second region image containing a specified color parameter. Screening the original image with preset threshold ranges in multiple color spaces, such as the Red-Green-Blue (RGB) color space and the Hue-Lightness-Saturation (HLS) color space, allows the region image containing lower-luminance objects to be screened out of the image automatically and accurately, without manual calibration, which improves the convenience and accuracy of detection.
According to the embodiment of the disclosure, the method may further include: after the image is converted into the first color space, acquiring a third region image in which the luminance parameter values of the pixels are greater than a second luminance threshold, where the second luminance threshold is greater than the first luminance threshold. The region image containing higher-luminance objects can thus also be screened out of the image automatically and accurately, again without manual calibration, which improves the convenience and accuracy of detection. The second region image and the third region image are then together taken as the to-be-detected region image. In this way, images of both higher-luminance and lower-luminance objects, for example an indicator lamp in the lit state and one in the unlit state, can be extracted from a single image, so no object to be detected is omitted and the degree of automation and accuracy of detection are improved.
According to the embodiment of the present disclosure, the method may further include smoothing the second region image and/or the third region image. Because the region images screened out through the color spaces contain some rough edges and small noise points, the contours of objects in the image can be smoothed by performing an opening operation first and a closing operation afterwards.
According to an embodiment of the disclosure, performing connected domain analysis and shape analysis on the to-be-detected region image and determining the object position may include: performing binarization on the region image to obtain a binarized region image; performing connected domain analysis on the binarized image to obtain connected domains; performing edge detection on the connected domains to obtain their edges; acquiring candidate connected domains whose edges have a preset shape; determining the object to be detected from the candidate connected domains based on connected domain size, the distance between connected domains, and the connected domain filling state; and then obtaining the position of the object to be detected in the image. Besides the connected domains of the objects to be detected, other connected domains may also have been screened in, so the connected domains are screened again based on size, shape, distance, filling state, and similar conditions to obtain exactly the connected domains of the objects to be detected, and further their positions in the image.
According to an embodiment of the present disclosure, determining and outputting the object state information at the object position may include first sorting the objects based on the coordinates of their positions in the image, and then sequentially outputting the state information at those coordinates. The state information of the objects to be detected can thus be output in a fixed order.
According to an embodiment of the present disclosure, the method may further include an operation of first acquiring normal state information of an object in a normal operating state of the electronic device, and then determining the operating state of the electronic device based on the normal state information of the object and state information at coordinates of the position of the object in the image. Therefore, after an image comprising the electronic equipment is acquired, the current working state of the electronic equipment can be determined directly according to the image and the normal state information of the object of the electronic equipment in the normal working state, and the automation degree and convenience of state detection of the electronic equipment are greatly improved.
According to an embodiment of the present disclosure, the method may further include reducing the size of the image after acquiring it. Because the detection method provided by the disclosure is not affected by the size of the image, the reduced device image can be detected directly, avoiding the step of cropping a picture of the indicator lamp area; at the same time, reducing the image size reduces the amount of computation, which helps speed up detection.
Another aspect of the present disclosure provides a detection apparatus, which may include an image acquisition module, a region image acquisition module, an analysis module, and an output module. The image acquisition module is configured to acquire an image including an object to be detected; the region image acquisition module is configured to acquire a to-be-detected region image containing the object to be detected, where the area of the region image is smaller than the area of the image; the analysis module is configured to perform connected domain analysis and shape analysis on the region image and determine the object position of the object to be detected in the image; and the output module is configured to determine and output the object state information at the object position.
According to an embodiment of the present disclosure, the region image acquiring module may include a first converting unit configured to convert the image into a first color space to acquire luminance parameters of pixels in the image, a first region acquiring unit configured to acquire a first region image in which values of the luminance parameters of the pixels in the image are all smaller than a first luminance threshold, a second converting unit configured to convert the first region image into a second color space to acquire color parameters of the pixels in the first region image, and a second region acquiring unit configured to acquire a second region image including a specified color parameter from the first region image based on the value of the color parameter of the first region image.
According to an embodiment of the present disclosure, the region image acquiring module may further include a third region acquiring unit and a region image to be detected acquiring unit, where the third region acquiring unit is configured to acquire a third region image in which a value of a luminance parameter of a pixel in the image is greater than a second luminance threshold after converting the image into the first color space, the second luminance threshold is greater than the first luminance threshold, and the region image to be detected acquiring unit is configured to take the second region image and the third region image as the region image to be detected.
According to an embodiment of the present disclosure, the region image obtaining module may further include a smoothing unit, where the smoothing unit is configured to smooth the image of the second region and/or the image of the third region.
According to an embodiment of the present disclosure, the analysis module may include a binarization processing unit, a connected domain analysis unit, an edge detection unit, a candidate connected domain acquisition unit, an object-to-be-detected determination unit, and a position acquisition unit. The binarization processing unit is configured to binarize the to-be-detected region image to obtain a binarized region image; the connected domain analysis unit is configured to perform connected domain analysis on the binarized image to obtain connected domains; the edge detection unit is configured to perform edge detection on the connected domains to obtain their edges; the candidate connected domain acquisition unit is configured to acquire candidate connected domains whose edges have a preset shape; the object-to-be-detected determination unit is configured to determine the object to be detected from the candidate connected domains based on connected domain size, the distance between connected domains, and the connected domain filling state; and the position acquisition unit is configured to acquire the position of the object to be detected in the image.
According to an embodiment of the present disclosure, the output module may include a sorting unit configured to sort the objects based on coordinates of object positions of the objects to be detected in the image, and an output unit configured to sequentially output state information at the coordinates of the object positions in the image.
According to an embodiment of the present disclosure, the output module may further include a device state information acquiring unit configured to acquire normal state information of an object in a normal operating state of the electronic device, and a device state determining unit configured to determine an operating state of the electronic device based on the normal state information of the object and the state information at the coordinates of the position of the object in the image.
According to an embodiment of the present disclosure, the apparatus may further include an image reduction module for reducing a size of the image after the image is acquired.
Another aspect of the present disclosure provides an electronic device comprising one or more processors and a storage, wherein the storage is configured to store executable instructions that, when executed by the processors, implement the method as described above.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions for implementing the method as described above when executed.
Another aspect of the disclosure provides a computer program comprising computer executable instructions for implementing the method as described above when executed.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
fig. 1A schematically illustrates an application scenario of a detection method, a detection apparatus, an electronic device and a medium according to an embodiment of the present disclosure;
FIG. 1B schematically illustrates a system architecture to which the detection method and detection apparatus according to an embodiment of the disclosure may be applied;
FIG. 2 schematically shows a flow chart of a detection method according to an embodiment of the present disclosure;
fig. 3A schematically illustrates a flowchart of acquiring an image of a region to be detected including the object to be detected in an image according to an embodiment of the present disclosure;
fig. 3B schematically illustrates a schematic diagram of acquiring an image of a region to be detected according to an embodiment of the present disclosure;
fig. 3C schematically shows a flowchart for determining an object position of an object to be detected in the image according to an embodiment of the present disclosure;
fig. 3D schematically illustrates a schematic diagram of determining an object to be detected according to an embodiment of the present disclosure;
FIG. 3E schematically illustrates a schematic diagram of determining a position of an object according to an embodiment of the disclosure;
FIG. 4 schematically shows a block diagram of a detection apparatus according to an embodiment of the present disclosure; and
fig. 5 schematically shows a block diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.). Where a convention analogous to "at least one of A, B or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.). The terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined as "first" or "second" may explicitly or implicitly include one or more of the described features.
Monitoring an electronic device to obtain the current status of its status indicator lamps can be implemented as follows. In one approach, the status indicator lamps in the image are marked by manual calibration, and the image at each calibration point is then analyzed to obtain the status of the corresponding lamp. This approach relies on manual calibration: when the size of the image changes, the calibrated positions of the status indicator lamps change, and each electronic device needs to be calibrated separately, so it is not convenient to popularize. In another approach, circle detection is used to detect circular objects directly in the image, fix the positions of the indicator lamps, and determine their color and brightness. However, this approach requires the status indicator lamps to appear clearly in the image and to occupy a relatively large proportion of it; otherwise redundant circular frames are detected and false detections occur. In addition, the brightness and color values differ under different illumination conditions, which may affect the final detection result. In yet another approach, color channels are screened in the RGB color space to obtain indicator lamps of different colors, circle detection is then performed to determine the positions of the status indicator lamps, and the status of each lamp is finally determined from its brightness value. Since the brightness value is judged directly after circle detection, the influence of factors such as light spots or other objects of similar color cannot be avoided.
As described above, to ensure the accuracy of the detection result, a small-area screenshot at each status indicator lamp is usually obtained by a technique such as calibration, and the status of the lamp is then determined from that screenshot; the status indicator lamps cannot be located and their states detected directly on the image of the whole electronic device, and the user experience is poor.
The embodiment of the disclosure provides a detection method, a detection device, an electronic device and a medium. The detection method comprises a position determination process and a state output process. In the position determining process, firstly, an image of a region to be detected including an object to be detected is obtained, then, connected domain analysis and shape analysis are carried out on the image of the region to be detected, and the object position of the object to be detected in the image is determined. After the position determination process is completed, a state output process is entered, and the object state information at the object position is determined and output.
Fig. 1A schematically illustrates an application scenario of a detection method, a detection apparatus, an electronic device, and a medium according to an embodiment of the present disclosure.
As shown in FIG. 1A, the XX electronic device has three status indicator lamps, and a user can determine the status of the device by observing them, such as whether it is powered on, whether it is running, and whether it is performing a dump. Of course, the electronic device may also be any of various devices with more status indicator lamps, such as an automobile instrument panel, a smart home appliance, or production equipment, which is not limited here. When a user needs to monitor the status of 5, 10, or 30 electronic devices and their status indicator lamps at the same time, the existing manner of inspecting each indicator lamp in person cannot meet the user's requirement. As shown in fig. 1A, by analyzing the image containing the status indicator lamps, the status of each lamp of the XX electronic device can be obtained automatically, and the status of the device can further be determined from the statuses of its lamps.
FIG. 1B schematically shows a system architecture to which the detection method and detection apparatus according to an embodiment of the disclosure may be applied. It should be noted that fig. 1B is only an example of a system architecture to which the embodiments of the present disclosure may be applied, to help those skilled in the art understand the technical content of the present disclosure; it does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments, or scenarios.
As shown in fig. 1B, the system architecture 100 according to the embodiment may include terminal devices 101, 102, 103, a network 104, a server 105, and devices 106, 107 having an image capturing function. The network 104 is used to provide a medium for communication links between the devices 106, 107 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user can use the terminal devices 101, 102, 103 for production, daily use, and the like; the terminal devices 101, 102, 103 may have a number of status indicator lamps, which give various status indications during operation. The terminal devices 101, 102, 103 may be various electronic devices having objects to be detected such as status indicator lamps, including but not limited to production equipment such as machine tools, office equipment such as printers, information display devices such as instrument panels, servers, test equipment, and the like.
The server 105 may be any of various computing devices capable of processing and analyzing images, such as a desktop computer or a blade server. In addition, the server may have a display to facilitate viewing of detection results and the like. Of course, the server may also send the detection results to a client so that the user can view the status of the status indicator lamps of the terminal devices 101, 102, 103.
It should be understood that the numbers of terminal devices, networks, image-capturing devices, and servers are merely illustrative. There may be any number of terminal devices, networks, and servers, as required in practice.
Fig. 2 schematically shows a flow chart of a detection method according to an embodiment of the present disclosure.
As shown in fig. 2, the method includes operations S201 to S207.
In operation S201, an image including an object to be detected is acquired.
In this embodiment, the image may be captured by a camera of the electronic device or the like, or may be a received image transmitted by another electronic device. The object to be detected can be an object with any one of attributes of a designated color, a designated brightness and a designated shape, such as a status indicator lamp, an indication image displayed by an electronic device and the like. When the image is collected, light can be supplemented to the object to be collected.
For example, an image may be captured by a camera, and the captured images may include images captured under a dim condition and images captured under a fill-light condition. The dim condition may refer to capturing the image with only the ceiling lamp of the room turned on, and the fill-light condition may refer to turning on a fill lamp directed at the object to be detected during capture. However, capturing an image under the fill-light condition may produce a reflective region on the object to be detected (or on the electronic device carrying it), which affects the detection result. In this case, images may be captured under both upper and lower fill-light conditions. The lower fill lamp is arranged below the object to be detected and is turned on when an image is about to be taken; for example, the fill lamp is switched on when an instruction to capture an image is received, or is controlled according to the image-capture period, which is not limited here.
It should be noted that, after the image is acquired, the method may further include the following operation: the size of the image may be reduced, for example to 3/4, 1/2, 1/4, or 1/6 of the original image size, as long as the details in the image remain clear. Because the detection method provided by this embodiment is not affected by the size of the image, the reduced device image can be detected directly, avoiding the step of cropping a picture of the indicator lamp area; at the same time, reducing the image size reduces the amount of computation, which helps speed up detection.
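By way of illustration only (the disclosure does not prescribe an implementation or a library; the sketch below assumes Python with OpenCV, and the file name and scale factor are placeholders of this example), the reduction step might look like this:

    import cv2

    def shrink(image, scale=0.5):
        # Reduce the image to a fraction of its original size; the
        # detection method itself is not affected by the image size,
        # while the smaller image cuts the amount of computation.
        h, w = image.shape[:2]
        return cv2.resize(image, (int(w * scale), int(h * scale)),
                          interpolation=cv2.INTER_AREA)

    image = cv2.imread("device.jpg")   # hypothetical input image
    small = shrink(image, scale=0.5)   # e.g. 1/2 of the original size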
In operation S203, an image of a region to be detected including the object to be detected in the image is obtained, where an area of the image of the region to be detected is smaller than an area of the image.
Specifically, the original image may be screened using three threshold ranges in two color spaces, the RGB color space and the HLS color space, to obtain the region to be detected.
For example, a region composed of pixels whose luminance parameter values in the HLS color space are all smaller than a first luminance threshold and whose colors are consistent with a preset color is taken as the first region to be detected. A region composed of pixels whose luminance parameters in the HLS color space are all larger than a second luminance threshold is taken as the second region to be detected. The first region and the second region together form the region to be detected. In this way, whether each pixel belongs to the object to be detected can be determined from the values of its luminance parameter and color parameter, and the region to be detected can be determined from those pixels.
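Continuing the illustrative OpenCV sketch (the threshold values 80 and 200 are placeholders, not values taken from the disclosure), the two luminance screens could be written as:

    import cv2

    def luminance_masks(image_bgr, t1=80, t2=200):
        # In OpenCV's HLS layout the channels are H, L, S, so channel 1
        # is the lightness (L) parameter of each pixel.
        hls = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HLS)
        lightness = hls[:, :, 1]
        # First screen: pixels darker than the first luminance threshold
        # (candidate unlit indicators; refined by color afterwards).
        dark_mask = cv2.inRange(lightness, 0, t1 - 1)
        # Second screen: pixels brighter than the second luminance
        # threshold (candidate lit indicators).
        bright_mask = cv2.inRange(lightness, t2 + 1, 255)
        return dark_mask, bright_mask

    dark_mask, bright_mask = luminance_masks(small)
    # AND-ing a mask with the original image keeps only the screened regions.
    dark_region = cv2.bitwise_and(small, small, mask=dark_mask)
    bright_region = cv2.bitwise_and(small, small, mask=bright_mask)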
For another example, the region to be detected may be obtained by screening a plurality of sub-regions of the image. For example, the image is divided into n × m sub-regions, and color space conversion is performed on each sub-region to obtain its luminance parameter values, color parameter values, and so on. Then, whether each sub-region contains the object to be detected is judged based on those values; if so, the image of the sub-region is taken as a to-be-detected region image. Here n and m may be equal or different, and both are positive integers. The effect is better when the area of each sub-region is close to the size of the object to be detected. A sketch of this variant is given below.
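A minimal sketch of the sub-region variant (the grid size is an arbitrary placeholder, and is_candidate is a hypothetical stand-in for the per-tile luminance/color judgment described above):

    def screen_subregions(image, n=8, m=12, is_candidate=lambda tile: True):
        # Split the image into an n x m grid and keep the tiles whose
        # luminance/color statistics indicate an object to be detected.
        h, w = image.shape[:2]
        kept = []
        for i in range(n):
            for j in range(m):
                tile = image[i * h // n:(i + 1) * h // n,
                             j * w // m:(j + 1) * w // m]
                if is_candidate(tile):
                    kept.append(((i, j), tile))
        return kept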
In operation S205, connected domain analysis and shape analysis are performed on the image of the region to be detected, and an object position of the object to be detected in the image is determined.
In this embodiment, the to-be-detected region image may be converted into a binary image, from which connected domains can be obtained; a connected domain refers to a region formed by a plurality of adjacent pixels having the same value. Then, it is judged whether the outline of each connected domain matches a preset shape (for example, a solid, non-overlapping circle); if so, the position of the connected domain can be taken as the object position of the object to be detected.
In operation S207, object state information at the object position is determined and output.
In this embodiment, since the image and the connected domain have a positional relationship, the coordinates of the connected domain on the image can be determined (e.g., the center point of the connected domain, or several points of the connected domain on its outline), for example the coordinates of the connected domain center relative to a corner of the image. In this way, the state information of the image at the object position, such as the luminance parameter values and color parameter values of the pixels there, can be determined, which facilitates determining the state information of the object to be detected on that basis.
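To make the read-out at an object position concrete, a hedged sketch follows (the disc radius, the lightness threshold, and the color rule are placeholders of this example; cv2.mean averages only the pixels under the mask):

    import cv2
    import numpy as np

    def state_at(image_bgr, cx, cy, radius=8):
        # Average color and lightness over a small disc around the
        # connected-domain center; more robust than a single pixel.
        mask = np.zeros(image_bgr.shape[:2], dtype="uint8")
        cv2.circle(mask, (int(cx), int(cy)), radius, 255, thickness=-1)
        b, g, r = cv2.mean(image_bgr, mask=mask)[:3]
        hls = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HLS)
        lightness = cv2.mean(hls, mask=mask)[1]
        color = "red" if r > max(g, b) else "green" if g > max(r, b) else "other"
        return color, "lit" if lightness > 150 else "off"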
According to the detection method provided by the disclosure, the region image containing the object to be detected is first extracted from the image, and connected domain analysis and shape analysis are then performed, so that influencing factors such as the differing appearances of electronic devices and lighting interference can be overcome, the object position of the object to be detected can be accurately determined, and the object state information at that position can be output for various electronic devices.
The method shown in fig. 2 will be further described with reference to fig. 3A to 3E.
Fig. 3A schematically shows a flowchart for acquiring an image of a region to be detected including the object to be detected in an image according to an embodiment of the present disclosure.
As shown in fig. 3A, the acquiring the image of the region to be detected including the object to be detected in the image may include operations S301 to S307.
In operation S301, the image is converted into a first color space to obtain a brightness parameter of a pixel in the image.
In operation S303, a first region image in which the values of the luminance parameters of the pixels in the image are all smaller than a first luminance threshold is acquired.
In operation S305, the first region image is converted into a second color space to obtain color parameters of pixels in the first region image.
In operation S307, a second region image including a designated color parameter is acquired from the first region image based on a value of the color parameter of the first region image.
Through the above operations, images of objects with small luminance parameter values can be preliminarily screened out of the image. In actual use, one image may contain both objects to be detected with smaller luminance values and objects to be detected with larger luminance values, none of which may be omitted; the latter can be obtained through operation S309.
In operation S309, after converting the image into the first color space, a third area image in which the value of the luminance parameter of the pixel in the image is greater than a second luminance threshold value, which is greater than the first luminance threshold value, is also acquired.
In addition, the second region image and the third region image may be set as the region image to be detected. Through the operation, the object to be detected with a smaller brightness parameter value and the object to be detected with a larger brightness parameter value in one image can be preliminarily identified.
In a specific embodiment, the object to be detected is an unlit status indicator lamp. For unlit lamps, bright areas are first suppressed in the HLS color space to obtain a binary mask image; the main purpose is to set the bright areas to black so that they are not detected here. Since the brightness of the environment in the image (including the device image and the room background) is generally similar to that of unlit status indicator lamps, the approximate area of the unlit lamps cannot be screened out through the luminance channel alone; otherwise many image areas that are not the object to be detected would also be selected. Considering the color characteristics of the status indicator lamps, the RGB channels corresponding to lamps of different colors can be screened in the RGB color space with preset value ranges to obtain a binary mask image, so as to further narrow down the positions of the unlit lamps. For example, the red (R) channel value of a red status indicator lamp is relatively large, so the R channel threshold is raised and the thresholds of the other channels are lowered: where the R channel value in the captured image is greater than its threshold and the green (G) and blue (B) channel values are smaller than theirs, the corresponding mask is white, and the remaining areas are black, giving the binary mask image of the RGB color space for red. Because the shooting environment includes both the dim condition and the fill-light condition, and the corresponding RGB channel value ranges differ between the two (for example, the range of the R value of a red lamp differs), two RGB binary mask images are obtained. By performing an OR operation on the two RGB binary mask images and then an AND operation with the HLS binary mask image and the original image, the regions of the original image that are white in both the RGB mask and the HLS mask are retained while the other regions become black, and the approximate positions of the red unlit status indicator lamps are screened out. If status indicator lamps of other colors are present, the above process can be repeated to obtain their approximate locations respectively.
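For the red, unlit case just described, the mask composition might be sketched as below; the two inRange windows stand for the dim-light and fill-light value ranges, and every bound is an illustrative placeholder rather than a value from the disclosure (OpenCV stores channels in B, G, R order):

    import cv2

    hls = cv2.cvtColor(small, cv2.COLOR_BGR2HLS)
    # Suppress bright areas: only pixels below the first luminance
    # threshold stay white, so lit regions are black and not detected.
    dark_mask = cv2.inRange(hls[:, :, 1], 0, 80)

    # Red screens in RGB: high R, low G and B; one value window per
    # shooting condition (dim condition / fill-light condition).
    red_dim = cv2.inRange(small, (0, 0, 90), (70, 70, 180))
    red_fill = cv2.inRange(small, (0, 0, 140), (90, 90, 255))

    # OR the two RGB masks, then AND with the HLS dark mask: only the
    # regions white in both color spaces survive, i.e. the approximate
    # positions of unlit red lamps. AND-ing with the original keeps
    # those regions of the image and blacks out everything else.
    red_mask = cv2.bitwise_or(red_dim, red_fill)
    candidate_mask = cv2.bitwise_and(red_mask, dark_mask)
    unlit_red = cv2.bitwise_and(small, small, mask=candidate_mask)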
In another embodiment, a status indicator lamp in the lit state is taken as an example. The image is converted to the HLS color space, and lit status indicator lamps are filtered mainly through a lightness (L) threshold: pixels of the image whose luminance parameter values are greater than the luminance threshold form the binary mask image of the HLS color space, in which those pixels are white and the rest are black. When an AND operation is performed between this mask and the original image, the parts of the original image under the white portion of the mask remain unchanged, while the parts under the black portion become black. In this way, the pixels of the original image whose luminance parameter values are greater than the luminance threshold are screened out; because the brightness of a lit status indicator lamp differs greatly from that of its surroundings, the approximate area of the object to be detected can be screened out through the luminance threshold alone, for example by taking the area formed by those pixels as the region to be detected.
Fig. 3B schematically shows a schematic diagram of acquiring an image of a region to be detected according to an embodiment of the present disclosure.
As shown in fig. 3B, the left image is a photographed image of the XX electronic device, in which the objects to be detected, three status indicator lamps, are present, but the image carries no calibrated position information for them. The image also includes the electronic device housing, a display screen image, a circular texture, an elliptical texture, a lattice-filled circular texture, and the like. After the operations shown in fig. 3A, the left image can be converted into the right image. The elliptical texture 31, the circular texture 34, and the lattice-filled circular texture 35 are retained because the luminance parameter values of their pixels are smaller than the first luminance threshold (e.g., within the unlit luminance range) and their color parameter values are the same as or similar to the specified color parameter value, while the remaining pixels are screened out. The image 33 may be retained either because the luminance parameter values of its pixels are less than the first luminance threshold and its color parameter values are the same as or similar to the specified color parameter value, or because the luminance parameter values of the pixels of the image 33 and the display screen image 32 are greater than the second luminance threshold.
Fig. 3C schematically shows a flowchart for determining an object position of an object to be detected in the image according to an embodiment of the present disclosure.
As shown in fig. 3C, performing connected component analysis and shape analysis on the image of the region to be detected, and determining the object position of the object to be detected in the image may include operations S311 to S321.
In operation S311, binarization processing is performed on the to-be-detected region image to obtain a binarized to-be-detected region image.
In operation S313, connected domain analysis is performed on the binarized to-be-detected region image to obtain a connected domain.
For example, the to-be-detected region image may be binarized into a black-and-white image, or the original image may be binarized to obtain the binarized region image. Connected domain analysis is then performed to obtain the connected domains in the image. Besides the connected domain of the status indicator lamp, which is the object to be detected, other screened-in connected domains may exist in the image, such as those of the elliptical texture 31, the display screen image 32, the circular texture 34, and the lattice-filled circular texture 35 in fig. 3B.
In operation S315, edge detection is performed on the connected component to obtain an edge of the connected component.
For example, edge detection is performed on the image obtained after connected domain analysis to obtain the edges of the connected domains, which facilitates detecting circles in the image.
In operation S317, a connected component to be selected having an edge of a preset shape is acquired.
In operation S319, the object to be detected is determined from the candidate connected domains based on the connected domain size, the distance between the connected domains, and the connected domain filling state.
In this embodiment, the connected domains can be screened by limiting their size and shape: connected domains whose area is larger than a maximum threshold or smaller than a minimum threshold, or whose aspect ratio is too large, are filtered out to obtain the connected domains of the object to be detected.
For example, since the shape of a status indicator lamp is known, such as a circle or a rectangle, shape detection can be performed on the image after edge detection, and the qualifying connected domains can be obtained by limiting the size of the shape, the shortest distance between the centers of two shapes, the number of status indicator lamps, and so on. Lit and unlit status indicator lamps may exist simultaneously on one electronic device, that is, the image contains objects to be detected in different states. Because the edge of a lit lamp is darker and close in color to an unlit lamp, the edge of a lit lamp may be picked up when detecting unlit lamps, and since the outline shapes are the same, the edge area of a lit lamp may be falsely detected as an unlit lamp. Therefore, after the shape detection passes, the filling state of the image inside the connected domain can be further judged, for example whether it is a solid image, to screen out such false detections.
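A sketch of the screening chain under the same OpenCV assumption; the area bounds, aspect-ratio limit, and fill-ratio test below are illustrative stand-ins for the size, distance, and filling-state conditions named above (the inter-lamp distance screen is omitted for brevity):

    import cv2

    gray = cv2.cvtColor(unlit_red, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)

    candidates = []
    for i in range(1, n):                 # label 0 is the background
        x, y, w, h, area = stats[i]
        if not (20 < area < 5000):        # size screen (placeholder bounds)
            continue
        if max(w, h) / min(w, h) > 1.5:   # roughly circular bounding box
            continue
        # Filling-state screen: a solid disc covers about pi/4 (~0.785)
        # of its bounding box, while a hollow edge covers far less.
        if area / float(w * h) < 0.6:
            continue
        candidates.append((int(centroids[i][0]), int(centroids[i][1])))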
Fig. 3D schematically shows a schematic diagram of determining an object to be detected according to an embodiment of the present disclosure.
As shown in fig. 3D, through the above operations the image in the left diagram of fig. 3D can be processed into the image in the right diagram. The elliptical texture is filtered out because it does not conform to the preset shape; the display screen image is filtered out for the same reason (if an image displayed on the screen conformed to the preset shape, it would be retained); the circular texture and the lattice-filled circular texture are filtered out because they do not conform to the required filling state; and the remaining three images are the images of the objects to be detected.
In operation S321, a position of the object to be detected in the image is acquired. The position of the object to be detected in the image can be determined according to the relative coordinates of each pixel in the image.
Fig. 3E schematically illustrates a schematic diagram of determining a position of an object according to an embodiment of the present disclosure.
As shown in fig. 3E, the left image is the image in which only the objects to be detected remain. Taking the lower left corner of the right image as the origin of coordinates (0, 0), the center coordinates of the image of object to be detected 1 can be expressed as (x1, y1). They can also be expressed as (x1, y1, R1), where x1 is the x coordinate, y1 is the y coordinate, and R1 is the radius of the circular object to be detected (not shown). Further, the image of object to be detected 1 may be represented by a region, for example by the coordinates of two diagonal vertices (xa, ya) and (xb, yb) of its bounding box (not shown). Similarly, the coordinates of the image of object to be detected 2 can be expressed as (x2, y2), and those of object to be detected 3 as (x3, y3).
In another embodiment, the determining and outputting the object state information at the object position may include the following operations.
Firstly, the objects to be detected are sorted based on the coordinates of their positions in the image.
Then, the state information at the coordinates of the object position in the image is sequentially output.
Referring to fig. 3E, if the objects to be detected are ranked in order of x coordinate from small to large, the object at coordinates (x1, y1) is ranked as object to be detected 1, the object at (x2, y2) as object to be detected 2, and the object at (x3, y3) as object to be detected 3. The state information at the coordinates of the object positions can then be output sequentially, for example: object to be detected 1 (red, lit), object to be detected 2 (green, lit), object to be detected 3 (yellow, off).
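Continuing the sketch, the ordering and output could be written as follows (state_at and candidates are the hypothetical helper and list from the earlier sketches):

    # Sort the detected lamps by x coordinate (then y), so the state
    # information is reported in a stable left-to-right order.
    candidates.sort(key=lambda c: (c[0], c[1]))

    for idx, (cx, cy) in enumerate(candidates, start=1):
        color, state = state_at(small, cx, cy, radius=8)
        print(f"object to be detected {idx} at ({cx}, {cy}): {color}, {state}")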
In another embodiment, the method may further include the following operations.
Firstly, the normal state information of the object of the electronic equipment in the normal working state is obtained.
Then, the operating state of the electronic device is determined based on the normal state information of the object and the state information at the coordinates of the object position in the image.
For example, as shown in the left diagram of fig. 3B, the three status indicator lamps are a power indicator, a running indicator, and a dump indicator. If the power indicator and the running indicator are on, the electronic device is in the normal operating state; if the power indicator and the dump indicator are on, the device is in the dump state; if the power indicator is off while any other status indicator is on, the device is abnormal; and if all three indicators are off, the device is powered off. The correspondence between the status indicator lamps and the operating states of the electronic device can be calibrated in advance, and this correspondence differs between different electronic devices.
On the basis of obtaining the normal state information of the object in the normal operation state of the electronic device, the operation state of the electronic device can be determined based on the normal state information of the object and the state information at the coordinates of the position of the object in the image.
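The disclosure leaves the representation of the normal-state information open; one simple illustrative encoding (the tuple layout and state names are assumptions of this sketch, calibrated per device model in practice) is:

    # Hypothetical expected states for the (power, running, dump) lamps
    # of the device in fig. 3B, calibrated in advance for this model.
    NORMAL = ("lit", "lit", "off")

    def device_state(detected):
        # Compare the detected lamp states against the calibrated table.
        if detected == NORMAL:
            return "normal operation"
        if detected == ("lit", "off", "lit"):
            return "dump in progress"
        if detected == ("off", "off", "off"):
            return "powered off"
        if detected[0] == "off" and "lit" in detected[1:]:
            return "abnormal"
        return "unknown"

    print(device_state(("lit", "lit", "off")))   # -> normal operation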
In another embodiment, the method may further include the following operation: smoothing the second region image and/or the third region image. Because the region to be detected obtained through color-space screening has some rough edges and small noise points, an 'opening operation' is performed first and then a 'closing operation', so that the outlines of objects in the image are smoothed and the small noise points are removed.
The 'opening operation' performs erosion first and then dilation; it separates slightly connected objects, removes isolated dots, burrs, and bridges, and essentially keeps the position and shape of every part of the image. It filters the image by geometric operations.
The 'closing operation' performs dilation first and then erosion; it joins finely connected objects, fills small holes, and closes small cracks, while the position and shape of every part of the image remain unchanged. It filters the image by filling its concave corners.
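In OpenCV terms (an assumed implementation; the 3 x 3 elliptical kernel is a placeholder), opening followed by closing on a screened binary mask, such as the candidate_mask from the earlier sketch, is:

    import cv2

    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    # Opening = erosion then dilation: removes isolated dots and burrs.
    opened = cv2.morphologyEx(candidate_mask, cv2.MORPH_OPEN, kernel)
    # Closing = dilation then erosion: fills small holes and cracks.
    smoothed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)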
Fig. 4 schematically shows a block diagram of a detection apparatus according to an embodiment of the present disclosure.
As shown in fig. 4, another aspect of the present disclosure provides a detection apparatus 400, and the detection apparatus 400 may include an image acquisition module 410, an area image acquisition module 430, an analysis module 450, and an output module 470.
The image obtaining module 410 is configured to obtain an image, where the image includes an object to be detected.
The region image acquiring module 430 is configured to acquire a region image to be detected including the object to be detected in the image, where an area of the region image to be detected is smaller than an area of the image.
The analysis module 450 is configured to perform connected domain analysis and shape analysis on the image of the region to be detected, and determine an object position of the object to be detected in the image.
The output module 470 is used to determine and output the object state information at the object position.
In one embodiment, the region image acquiring module 430 may include a first converting unit, a first region acquiring unit, a second converting unit, and a second region acquiring unit.
For example, the first conversion unit is configured to convert the image into a first color space to obtain a luminance parameter of a pixel in the image.
The first area acquisition unit is used for acquiring a first area image of which the brightness parameters of the pixels in the image are all smaller than a first brightness threshold value.
The second conversion unit is configured to convert the first region image into a second color space to acquire the color parameters of the pixels in the first region image.
The second region acquisition unit is configured to acquire a second region image including a specified color parameter from the first region image based on a value of a color parameter of the first region image.
In addition, the region image acquiring module 430 may further include a third region acquiring unit and a to-be-detected region image acquiring unit.
The third area obtaining unit is configured to obtain a third area image in which a value of a luminance parameter of a pixel in the image is greater than a second luminance threshold value after the image is converted into the first color space, where the second luminance threshold value is greater than the first luminance threshold value.
The to-be-detected region image acquiring unit is configured to use the second region image and the third region image as the to-be-detected region image.
In order to remove burrs, pinholes and the like in the image, the region image acquisition module may further include a smoothing unit, wherein the smoothing unit is configured to smooth the image of the second region and/or the image of the third region.
In another embodiment, the analysis module 450 may include a binarization processing unit, a connected component analysis unit, an edge detection unit, a connected component to be selected acquisition unit, an object to be detected determination unit, and a position acquisition unit.
The binarization processing unit is configured to binarize the to-be-detected region image to obtain a binarized to-be-detected region image.
The connected domain analysis unit is configured to perform connected domain analysis on the binarized to-be-detected region image to obtain connected domains.
The edge detection unit is configured to perform edge detection on the connected domains to obtain their edges.
The candidate connected domain acquiring unit is configured to acquire candidate connected domains whose edges have a preset shape.
The object to be detected determining unit is configured to determine the object to be detected from the candidate connected domains based on the size of the connected domains, the distance between connected domains, and the filling state of the connected domains.
The position acquisition unit is used for acquiring the position of the object to be detected in the image.
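A hedged sketch of this analysis pipeline, again in Python/OpenCV. The preset shape, the circularity and fill-ratio thresholds, and the area bounds are illustrative assumptions (a roughly circular lamp is assumed); the inter-domain distance criterion mentioned above is omitted for brevity.

```python
import cv2
import numpy as np

def locate_objects(region_mask, min_area=30, max_area=5000):
    # Binarization unit (the mask may already be binary; Otsu for generality).
    _, binary = cv2.threshold(region_mask, 0, 255,
                              cv2.THRESH_BINARY | cv2.THRESH_OTSU)

    # Connected domain analysis unit.
    count, labels, stats, _ = cv2.connectedComponentsWithStats(binary)

    candidates = []
    for i in range(1, count):  # label 0 is the background
        x, y, w, h, area = stats[i]
        if not (min_area <= area <= max_area):
            continue

        # Edge detection unit: the external contour of the connected domain.
        component = (labels == i).astype(np.uint8) * 255
        contours, _ = cv2.findContours(component, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)

        # Candidate connected domain unit: a circularity test stands in
        # for the "preset shape" check.
        perimeter = cv2.arcLength(contours[0], True)
        circularity = 4.0 * np.pi * area / (perimeter * perimeter + 1e-6)

        # Object determining unit: size was checked above; the filling state
        # is approximated by the area-to-bounding-box ratio.
        fill_ratio = area / float(w * h)
        if circularity > 0.7 and fill_ratio > 0.5:
            candidates.append((x, y, w, h))  # position in image coordinates
    return candidates
```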
In another embodiment, the output module 470 may include a sorting unit and an output unit.
The sorting unit is configured to sort the objects to be detected based on the coordinates of their object positions in the image.
The output unit is configured to sequentially output the state information at the coordinates of those object positions.
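For instance, a lamp panel read left to right could be handled as below; the row-major ordering and the on/off criterion (a majority of lit pixels inside the bounding box) are assumptions, since the patent leaves both open.

```python
import cv2

def report_states(candidates, lit_mask):
    # Sorting unit: order by row, then by column, of the box coordinates.
    ordered = sorted(candidates, key=lambda box: (box[1], box[0]))

    # Output unit: emit one state per object, in coordinate order.
    states = []
    for x, y, w, h in ordered:
        lit = cv2.countNonZero(lit_mask[y:y + h, x:x + w])
        states.append("on" if lit > 0.5 * w * h else "off")
    return states
```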
In other embodiments, the output module 470 may further include a device status information obtaining unit and a device status determining unit.
The device state information acquisition unit is configured to acquire the normal object state information of the electronic device in its normal working state.
The device state determination unit is configured to determine an operating state of the electronic device based on the normal state information of the object and the state information at the coordinates of the position of the object in the image.
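One simple realization, assuming the normal state information is a per-object list of expected states recorded in advance:

```python
def check_device(observed_states, normal_states):
    # Device state determination: the device works normally only when
    # every observed object state matches the recorded reference pattern;
    # otherwise the indices of the mismatching objects are reported.
    if len(observed_states) != len(normal_states):
        return False, []
    faults = [i for i, (seen, expected)
              in enumerate(zip(observed_states, normal_states))
              if seen != expected]
    return not faults, faults
```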
In order to reduce the computational resources occupied by image processing, the apparatus 400 may further include an image reduction module for reducing the size of the image after it is acquired.
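A sketch of such a reduction step, plus how the pieces above might compose end to end; the 640-pixel cap and the file name are placeholders, not values from the patent.

```python
import cv2

def reduce_image(image_bgr, max_side=640):
    # Downscale so the longer side is at most max_side pixels; INTER_AREA
    # is a common interpolation choice when shrinking.
    h, w = image_bgr.shape[:2]
    scale = max_side / float(max(h, w))
    if scale >= 1.0:
        return image_bgr
    return cv2.resize(image_bgr, (int(w * scale), int(h * scale)),
                      interpolation=cv2.INTER_AREA)

# Hypothetical end-to-end use of the sketches above:
#   image = reduce_image(cv2.imread("panel.jpg"))   # acquisition + reduction
#   mask = extract_regions(image)                   # region image module
#   boxes = locate_objects(mask)                    # analysis module
#   print(report_states(boxes, mask))               # output module
```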
According to embodiments of the present disclosure, any of the modules or units, or at least part of the functionality of any of them, may be implemented in a single module, and any one or more of them may be split into multiple modules. Any one or more of the modules or units may be implemented at least partially as a hardware circuit, such as a field-programmable gate array (FPGA), a programmable logic array (PLA), a system on a chip, a system on a substrate, a system in a package, or an application-specific integrated circuit (ASIC), or by any other reasonable means of integrating or packaging circuits in hardware or firmware, or in any one of the three implementations of software, hardware, and firmware, or in any suitable combination thereof. Alternatively, one or more of the modules or units may be implemented at least partly as computer program modules which, when executed, may perform the corresponding functions.
For example, any of the image acquisition module 410, the region image acquisition module 430, the analysis module 450, and the output module 470 may be combined and implemented in one module, or any one of them may be split into multiple modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of these four modules may be implemented at least partially as a hardware circuit, such as an FPGA, a PLA, a system on a chip, a system on a substrate, a system in a package, or an ASIC, or by any other reasonable means of integrating or packaging circuits in hardware or firmware, or in any one of the three implementations of software, hardware, and firmware, or in any suitable combination thereof. Alternatively, at least one of them may be implemented at least partially as a computer program module which, when executed, may perform the corresponding functions.
Fig. 5 schematically shows a block diagram of an electronic device according to an embodiment of the disclosure. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 5, an electronic device 500 according to an embodiment of the present disclosure includes a processor 501 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)502 or a program loaded from a storage section 508 into a Random Access Memory (RAM) 503. The processor 501 may comprise, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 501 may also include onboard memory for caching purposes. Processor 501 may include a single processing unit or multiple processing units for performing different actions of a method flow according to embodiments of the disclosure.
In the RAM 503, various programs and data necessary for the operation of the system 500 are stored. The processor 501, the ROM 502, and the RAM 503 are connected to each other by a bus 504. The processor 501 performs various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM 502 and/or the RAM 503. Note that the programs may also be stored in one or more memories other than the ROM 502 and the RAM 503. The processor 501 may also perform various operations of method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the system 500 may also include an input/output (I/O) interface 505, which is also connected to the bus 504. The system 500 may also include one or more of the following components connected to the I/O interface 505: an input section 506 including a keyboard, a mouse, and the like; an output section 507 including a display such as a cathode ray tube (CRT) or a liquid crystal display (LCD), and a speaker; a storage section 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card or a modem. The communication section 509 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 510 as needed, so that a computer program read from it can be installed into the storage section 508 as needed.
According to embodiments of the present disclosure, method flows according to embodiments of the present disclosure may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 509, and/or installed from the removable medium 511. The computer program, when executed by the processor 501, performs the above-described functions defined in the system of the embodiments of the present disclosure. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, but is not limited to: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present disclosure, a computer-readable storage medium may include the ROM 502 and/or the RAM 503 and/or one or more memories other than the ROM 502 and RAM 503 described above.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations are not expressly recited in the present disclosure. In particular, various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit or teaching of the present disclosure. All such combinations and/or sub-combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (8)

1. A method of detection, comprising:
acquiring an image, wherein the image comprises an object to be detected;
acquiring a to-be-detected region image including the to-be-detected object in the image, wherein the area of the to-be-detected region image is smaller than that of the image;
performing connected domain analysis and shape analysis on the image of the area to be detected, and determining the object position of the object to be detected in the image; and
determining and outputting object state information at the object position;
wherein the acquiring the image of the region to be detected including the object to be detected comprises:
converting the image into a first color space to obtain a brightness parameter of a pixel in the image;
acquiring a first region image in which the brightness parameter values of the pixels in the image are all smaller than a first brightness threshold;
converting the first region image into a second color space to obtain color parameters of pixels in the first region image; and
acquiring a second region image including a specified color parameter from the first region image based on a value of the color parameter of the first region image;
further comprising:
after converting the image into the first color space, acquiring a third region image in which the value of the brightness parameter of the pixels in the image is greater than a second brightness threshold, wherein the second brightness threshold is greater than the first brightness threshold; and
taking the second region image and the third region image as the to-be-detected region image.
2. The method according to claim 1, wherein the performing connected domain analysis and shape analysis on the image of the region to be detected and determining the object position of the object to be detected in the image comprises:
carrying out binarization processing on the to-be-detected region image to obtain a binarized to-be-detected region image;
performing connected domain analysis on the binarized to-be-detected region image to obtain a connected domain;
performing edge detection on the connected domain to obtain the edge of the connected domain;
acquiring a connected domain to be selected with an edge of a preset shape;
determining the object to be detected from the connected domain to be selected based on the size of the connected domain, the distance between the connected domains and the filling state of the connected domain; and
acquiring the position of the object to be detected in the image.
3. The method of claim 1, wherein the determining and outputting object state information at the object location comprises:
sorting the objects to be detected based on the coordinates of their object positions in the image; and
sequentially outputting the state information at the coordinates of the object positions in the image.
4. The method of claim 3, further comprising:
acquiring normal state information of an object of the electronic equipment in a normal working state;
determining an operating state of the electronic device based on the normal state information of the object and the state information at the coordinates of the object position in the image.
5. A detection device, comprising:
the image acquisition module is used for acquiring an image, and the image comprises an object to be detected;
the region image acquisition module is used for acquiring a to-be-detected region image including the object to be detected in the image, wherein the area of the to-be-detected region image is smaller than that of the image;
the analysis module is used for carrying out connected domain analysis and shape analysis on the image of the area to be detected and determining the object position of the object to be detected in the image; and
the output module is used for determining and outputting the object state information at the object position;
wherein the region image acquiring module includes:
the first conversion unit is used for converting the image into a first color space so as to obtain the brightness parameter of the pixel in the image;
the first area acquisition unit is used for acquiring a first area image of which the brightness parameters of pixels in the image are all smaller than a first brightness threshold;
the second conversion unit is used for converting the first area image into a second color space so as to obtain color parameters of pixels in the first area image; and
a second region acquisition unit configured to acquire a second region image including a specified color parameter from the first region image based on a value of a color parameter of the first region image;
the region image acquiring module further includes:
a third region acquiring unit, configured to acquire a third region image in which a value of a luminance parameter of a pixel in the image is greater than a second luminance threshold value after converting the image into the first color space, where the second luminance threshold value is greater than the first luminance threshold value; and
the to-be-detected region image acquisition unit is used for taking the second region image and the third region image as the to-be-detected region image.
6. The apparatus of claim 5, wherein the analysis module comprises:
a binarization processing unit, configured to perform binarization processing on the to-be-detected region image to obtain a binarized to-be-detected region image;
the connected domain analysis unit is used for carrying out connected domain analysis on the binarized to-be-detected region image to obtain a connected domain;
an edge detection unit, configured to perform edge detection on the connected domain to obtain an edge of the connected domain;
a candidate connected domain obtaining unit, configured to obtain a candidate connected domain with an edge having a preset shape;
the object to be detected determining unit is used for determining the object to be detected from the connected domain to be selected based on the size of the connected domain, the distance between the connected domains and the filling state of the connected domain; and
the position acquisition unit is used for acquiring the position of the object to be detected in the image.
7. An electronic device, comprising:
one or more processors;
storage means for storing executable instructions which, when executed by the one or more processors, implement the method of any one of claims 1 to 4.
8. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, implement a method according to any one of claims 1 to 4.
CN201910636959.7A 2019-07-15 2019-07-15 Detection method, detection device, electronic apparatus, and medium Active CN110335273B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910636959.7A CN110335273B (en) 2019-07-15 2019-07-15 Detection method, detection device, electronic apparatus, and medium

Publications (2)

Publication Number Publication Date
CN110335273A CN110335273A (en) 2019-10-15
CN110335273B (en) 2021-03-05

Family

ID=68144843

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910636959.7A Active CN110335273B (en) 2019-07-15 2019-07-15 Detection method, detection device, electronic apparatus, and medium

Country Status (1)

Country Link
CN (1) CN110335273B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111325181B (en) * 2020-03-19 2023-12-05 京东科技信息技术有限公司 State monitoring method and device, electronic equipment and storage medium
CN113763402A (en) * 2020-06-04 2021-12-07 Oppo(重庆)智能科技有限公司 Detection method, detection device, electronic equipment and storage medium
CN111862195B (en) * 2020-08-26 2024-04-09 Oppo广东移动通信有限公司 Light spot detection method and device, terminal and storage medium
CN112150438B (en) * 2020-09-23 2023-01-20 创新奇智(青岛)科技有限公司 Disconnection detection method, disconnection detection device, electronic device and storage medium
CN112862800B (en) * 2021-02-25 2023-01-24 歌尔科技有限公司 Defect detection method and device and electronic equipment
CN114511610A (en) * 2022-01-06 2022-05-17 自然资源部第三海洋研究所 Method, device and storage medium for determining size of object to be measured in image
CN115631160A (en) * 2022-10-19 2023-01-20 武汉海微科技有限公司 LED lamp fault detection method, device, equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9070305B1 (en) * 2010-01-22 2015-06-30 Google Inc. Traffic light detecting system and method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102568242A (en) * 2012-01-17 2012-07-11 杭州海康威视系统技术有限公司 Signal lamp state detection method and system based on video processing
CN103488987A (en) * 2013-10-15 2014-01-01 浙江宇视科技有限公司 Video-based method and device for detecting traffic lights
CN109598244A (en) * 2018-12-07 2019-04-09 吉林大学 A kind of traffic lights identifying system and its recognition methods

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jian Huang et al., "Multi-color space threshold segmentation and self-learning k-NN algorithm for surge test EUT status identification", Frontiers of Mechanical Engineering, vol. 11, no. 3, pp. 311-315, 2016 *

Similar Documents

Publication Publication Date Title
CN110335273B (en) Detection method, detection device, electronic apparatus, and medium
US11587219B2 (en) Method and apparatus for detecting pixel defect of optical module, and device
US10559081B2 (en) Method and system for automated visual analysis of a dipstick using standard user equipment
US8577170B2 (en) Shadow detection in a single image
CN113781396B (en) Screen defect detection method, device, equipment and storage medium
CN105069453A (en) Image correction method and apparatus
CN116503388B (en) Defect detection method, device and storage medium
CN111145138B (en) Detection method, device and equipment for LED lamp panel and storage medium
CN115330789A (en) Screen defect detection method, device, equipment and readable storage medium
CN113785181A (en) OLED screen point defect judgment method and device, storage medium and electronic equipment
CN114913109A (en) Image anomaly detection method and device, test chart and terminal equipment
CN113630594B (en) Bad point detection system of display panel
CN111915601B (en) Abnormality test method, device and system for intelligent terminal
JP7342616B2 (en) Image processing system, setting method and program
JP2008014842A (en) Method and apparatus for detecting stain defects
CN114155179A (en) Light source defect detection method, device, equipment and storage medium
KR20140082333A (en) Method and apparatus of inspecting mura of flat display
CN115705807A (en) Method, device, equipment and medium for testing low-gray display effect of display screen
CN114677649A (en) Image recognition method, apparatus, device and medium
CN110288662B (en) Display detection method and system
KR101383827B1 (en) System and method for automatic extraction of soldering regions in pcb
JP2020064019A (en) Measurement data collection device and program
WO2020107196A1 (en) Photographing quality evaluation method and apparatus for photographing apparatus, and terminal device
KR20190075283A (en) System and Method for detecting Metallic Particles
CN114299854B (en) LED display screen adjusting system, method, equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 601, 6/F, Building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Patentee after: Jingdong Technology Information Technology Co.,Ltd.

Address before: 601, 6/F, Building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Patentee before: BEIJING HAIYI TONGZHAN INFORMATION TECHNOLOGY Co.,Ltd.