CN112116551A - Camera shielding detection method and device, electronic equipment and storage medium - Google Patents

Camera shielding detection method and device, electronic equipment and storage medium

Info

Publication number
CN112116551A
CN112116551A (application CN201910537620.1A)
Authority
CN
China
Prior art keywords
picture
color
area
ratio
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910537620.1A
Other languages
Chinese (zh)
Inventor
林泽雄
黄晶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910537620.1A
Publication of CN112116551A
Legal status: Pending

Classifications

    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis > G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/00 Image analysis > G06T 7/10 Segmentation; Edge detection > G06T 7/11 Region-based segmentation
    • G06T 7/00 Image analysis > G06T 7/60 Analysis of geometric attributes > G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 7/00 Image analysis > G06T 7/90 Determination of colour characteristics
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement > G06T 2207/10 Image acquisition modality > G06T 2207/10016 Video; Image sequence
    • G06T 2207/10 Image acquisition modality > G06T 2207/10024 Color image
    • G06T 2207/20 Special algorithmic details > G06T 2207/20036 Morphological image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a camera occlusion detection method and device, an electronic device, and a storage medium. The method comprises the following steps: dividing a picture, acquired through a camera, into a target area and a non-target area according to a threshold corresponding to feature information of a preset color, the target area being associated with the preset color; determining the ratio of the area of the target area to the area of the picture; and, if the ratio is greater than a preset ratio, determining that the camera was occluded at the moment the picture was acquired and that the color of the occluding object matches the preset color. In this way, human resources can be saved.

Description

Camera shielding detection method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of internet communication technologies, and in particular, to a method and an apparatus for detecting camera occlusion, an electronic device, and a storage medium.
Background
With the rapid development of computer technology, informatization plays an increasingly important role in social life. When using a terminal (such as a desktop computer, a notebook computer, a mobile phone, or a tablet computer), a user can take a picture or record a video with its camera.
However, while taking a picture or recording a video, the user may unknowingly block the camera with part of the body. The resulting picture, or a segment of the recorded video, then fails to meet the user's requirements, forcing the user to redo the work and wasting manpower.
Disclosure of Invention
The embodiments of the present application provide a camera occlusion detection method and device, an electronic device, and a storage medium, which can save human resources.
In one aspect, an embodiment of the present application provides a method for detecting camera occlusion, where the method includes:
dividing the picture into a target area and a non-target area according to a threshold value corresponding to the characteristic information of the preset color; the target area is associated with a preset color, and the picture is acquired through a camera;
determining the ratio of the area of the target area to the area of the picture;
if the ratio is larger than the preset ratio, it is determined that the camera is shielded at the moment of acquiring the picture, and the color of the object shielding the camera is matched with the preset color.
Another aspect provides a camera occlusion detection device, the device comprising:
the area distinguishing module is used for dividing the picture into a target area and a non-target area according to a threshold value corresponding to the characteristic information of the preset color; the target area is associated with a preset color, and the picture is acquired through a camera;
the ratio determining module is used for determining the ratio of the area of the target area to the area of the picture;
and the judging module is used for determining that the camera is blocked at the moment of acquiring the picture and the color of the object for blocking the camera is matched with the preset color if the ratio is greater than the preset ratio.
Another aspect provides an electronic device, wherein the electronic device includes a processor and a memory, and the memory stores at least one instruction, at least one program, a code set, or a set of instructions, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by the processor to implement the camera occlusion detection method as described above.
Another aspect provides a computer-readable storage medium having at least one instruction, at least one program, a set of codes, or a set of instructions stored therein, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by a processor to implement the camera occlusion detection method as described above.
The camera shielding detection method, the camera shielding detection device, the electronic equipment and the storage medium have the following technical effects:
The picture, acquired through a camera, is divided into a target area and a non-target area according to a threshold corresponding to the feature information of a preset color, the target area being associated with the preset color. The ratio of the area of the target area to the area of the picture is determined; if the ratio is greater than a preset ratio, it is determined that the camera was occluded at the moment the picture was acquired and that the color of the occluding object matches the preset color. In this way, human resources can be saved.
Drawings
To illustrate the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an application environment provided by an embodiment of the present application;
fig. 2 is a schematic flowchart of a camera occlusion detection method according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating a method for dividing a picture into a target area and a non-target area according to an embodiment of the present disclosure;
FIG. 4 is a photograph provided by an embodiment of the present application;
FIG. 5 is a photograph provided by an embodiment of the present application;
FIG. 6 is a photograph provided by an embodiment of the present application;
fig. 7 is a flowchart illustrating a method for determining a ratio of an area of a target region to an area of a picture according to an embodiment of the present application;
fig. 8 is a schematic diagram of a camera occlusion detection display interface according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a camera occlusion detection device according to an embodiment of the present application;
fig. 10 is a block diagram of a hardware structure of a server in a camera occlusion detection method according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present application.
It should be noted that the terms "first", "second", and the like in the description, claims, and drawings of this application are used to distinguish between similar elements and do not necessarily describe a particular sequential or chronological order. The data so used are interchangeable under appropriate circumstances, so that the embodiments described herein can be practiced in sequences other than those illustrated or described. Furthermore, the terms "comprise", "include", and "have", and any variations thereof, are intended to cover a non-exclusive inclusion: a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those expressly listed, but may include other steps or elements not expressly listed or inherent to it.
In the embodiments of the present application, one optional application scenario is live video streaming. Specifically, a video anchor can stream live through an application on a terminal, and users can watch the video through the same application on their own terminals. Between the anchor's terminal and the users' terminals sits a server that provides services for the application. After establishing communication connections with both parties, the server receives the video uploaded by the anchor and forwards it to the users' terminals. The server may also display the video on a screen of the service provider, facilitating supervision.
During a live broadcast, the anchor's camera may become occluded, for example, when the anchor unconsciously covers it with a hand, which degrades the viewing experience. To address this, service providers have had to assign staff to monitor videos, find streams with occluded cameras, and remind the anchors. Manpower, however, is limited, while the videos on a platform are massive, so not all of them can be monitored at once. The embodiments of the present application offer an alternative way to solve these problems in live video streaming.
Referring to fig. 1, fig. 1 is a schematic diagram of an application environment according to an embodiment of the present application, including a server 101 and a terminal 102. The terminal 102 may be a desktop computer, a notebook computer, a mobile phone, a tablet computer, or the like, which may be loaded with a video application. In the embodiment of the present application, the server 101 and the terminal 102 may be connected by a wireless link.
In an optional embodiment, the server is the entity that implements the camera occlusion detection method. After the anchor uploads the video to the server, the server captures a picture from the video, divides the picture into a target area and a non-target area according to the threshold corresponding to the feature information of a preset color, determines the ratio of the area of the target area to the area of the picture, checks whether the ratio is greater than the preset ratio, and from that conclusion determines whether the camera was occluded at the moment the picture was acquired.
In another optional embodiment, the terminal is the entity that implements the method. The terminal captures a picture directly from the video, divides it into a target area and a non-target area according to the threshold corresponding to the feature information of a preset color, determines the ratio of the area of the target area to the area of the picture, checks whether the ratio is greater than the preset ratio, and from that conclusion determines whether the camera was occluded at the moment the picture was acquired. In this case, the program implementing the method can be deployed to the terminal as a component update.
If the server determines that the camera was occluded at the moment the picture was acquired, it can send prompt information to the terminal to notify the anchor that the camera is blocked. If the terminal makes this determination, it can issue the prompt itself. Both are optional implementations; the following description of the embodiments takes the server as the executing entity.
A specific embodiment of the camera occlusion detection method is described below. Fig. 2 is a schematic flowchart of the method according to an embodiment of the present application. This specification provides the operation steps as in the embodiment or flowchart, but more or fewer steps may be included based on conventional or non-inventive labor. The order of steps recited in the embodiment is merely one of many possible execution orders and does not represent the only one; in practice, a system or server product may execute the steps sequentially or in parallel (for example, in a parallel-processor or multi-threaded environment). Specifically, as shown in fig. 2, the method may include:
S201: dividing the picture into a target area and a non-target area according to a threshold value corresponding to the characteristic information of the preset color; the target area is associated with a preset color, and the picture is acquired through a camera.
In the embodiment of the present application, the picture is a picture in a video, and optionally, the picture is acquired by a terminal through a camera on the terminal and uploaded to a server.
In the embodiments of the present application, pictures may be in different formats, such as RGB, HSV, or HSL, and the pixels of a picture carry different feature information depending on the format. For example, each pixel of an RGB-format picture is represented by red, green, and blue components, and different combinations of these values produce pixels of different colors, so the feature information of an RGB-format picture is red, green, and blue. Similarly, each pixel of an HSV-format picture is represented by hue, saturation, and brightness (value), and different combinations of these values produce pixels of different colors, so the feature information of an HSV-format picture is hue, saturation, and brightness.
In an optional implementation manner, based on that the format of the picture is an RGB format, an embodiment of the present application describes an optional implementation method for dividing the picture into a target area and a non-target area, fig. 3 is a schematic flow diagram of a method for dividing the picture into the target area and the non-target area, which is provided in the embodiment of the present application, and specifically as shown in fig. 3, the method may include:
s301: judging the content uploaded by the terminal according to the format; if the format of the uploaded content is the video format, the process goes to S303; otherwise, no processing is performed.
In the embodiment of the present application, the video formats include rm, rmvb, mp4, wmv, asf, asx, 3gb, mov, and m4 v. If the format of the content uploaded by the terminal is any one of the formats, the uploaded content is a video.
S303: determining the content uploaded by the terminal as a video, intercepting the video according to a preset time period, and acquiring a picture.
In an optional implementation manner in which the server obtains the picture, the server obtains the video, and may intercept the video according to a preset time period to obtain the picture. Specifically, after the communication connection is established between the terminal and the server, the video can be uploaded to the server in real time in the process of anchor live broadcasting. After receiving the video in real time, the server performs framing processing, that is, picture capturing can be performed from the video according to a preset time period. For example, the server may capture a picture from the video every 2 seconds as a basis for subsequently determining whether the camera is occluded.
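The sampling step described above (capturing one frame every preset period, e.g. every 2 seconds) can be sketched as a small helper that computes which frame indices to grab; the function name and signature below are illustrative, not part of the embodiment:

```python
def sample_frame_indices(total_frames: int, fps: float, period_s: float) -> list:
    """Indices of the frames to capture: one every period_s seconds."""
    step = max(1, int(round(fps * period_s)))
    return list(range(0, total_frames, step))

# A 10-second clip at 30 fps, sampled every 2 seconds:
print(sample_frame_indices(300, 30, 2))  # [0, 60, 120, 180, 240]
```

The returned indices are then the frames the server extracts from the stream for occlusion checking.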
S305: judging the format of the picture; if the picture format is HSV, go to S309; if the format of the picture is not HSV format, go to S307.
S307: and converting the non-HSV format picture into the HSV format picture.
Assuming that the intercepted picture is in an RGB format, the RGB format picture can be converted into an HSV format picture through format conversion.
In an alternative embodiment, the cv2.cvtColor function may be used to convert the picture to HSV format. Specifically, hsv_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV), where hsv_frame is the output HSV-format picture, frame is the input picture (note that OpenCV stores color images in BGR channel order), and cv2.COLOR_BGR2HSV is the conversion code from BGR to HSV.
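For illustration, the per-pixel conversion that cv2.cvtColor performs can be reproduced with Python's standard colorsys module, rescaled to OpenCV's 8-bit HSV ranges (hue 0-179, saturation and value 0-255). The helper below is a hypothetical sketch, not the library call itself:

```python
import colorsys

def bgr_to_opencv_hsv(b: int, g: int, r: int) -> tuple:
    """Convert one 8-bit BGR pixel to OpenCV's HSV scale (H in 0..179, S/V in 0..255).

    Mirrors what cv2.cvtColor(frame, cv2.COLOR_BGR2HSV) does per pixel.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return int(h * 180), int(s * 255), int(v * 255)

# A pure orange pixel (BGR = 0, 165, 255) lands inside the orange hue interval 11-25:
print(bgr_to_opencv_hsv(0, 165, 255))  # (19, 255, 255)
```

This makes the hue intervals used later in the embodiment concrete: a saturated orange pixel maps to hue 19, which falls inside the orange interval described below.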
S309: and determining an upper threshold value and a lower threshold value corresponding to the characteristic information of the preset color.
In the embodiments of the present application, the server does not know in advance which colors a picture contains, so the preset color may be any color. Assuming the preset color is orange, the upper threshold corresponding to the feature information of orange is [25, 255, 255] and the lower threshold is [11, 43, 46], which means: the highest orange hue value is 25, the highest saturation is 255, and the highest brightness is 255; the lowest orange hue value is 11, the lowest saturation is 43, and the lowest brightness is 46. The upper and lower thresholds of a preset color can be obtained by table lookup: the thresholds of each color are stored in a storage area, and once the preset color is determined, its thresholds are read from that area.
S311: and determining a threshold interval corresponding to the preset color according to the upper threshold value and the lower threshold value.
Continuing with orange as the preset color, determining the threshold interval corresponding to orange yields: a hue interval of 11-25, a saturation interval of 43-255, and a brightness interval of 46-255.
S313: judging whether a characteristic value corresponding to characteristic information of pixel points contained in the picture is located in a threshold interval or not; if yes, go to S315; otherwise, go to S319.
In the embodiment of the present application, it may be determined whether a feature value corresponding to each feature information of each pixel point in a picture is located in a threshold interval, and if the hue, saturation, and brightness of a first pixel point of the picture are [23, 80, 80], each feature value of the pixel point is located in a threshold interval corresponding to orange; assuming that the hue, saturation and brightness of the second pixel point of the picture are [23, 20, 125], the saturation 20 of the pixel point is not located in the threshold interval of the saturation corresponding to orange.
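The per-channel judgment of S313 can be sketched as follows, reproducing the two example pixels above against the orange thresholds (the function name is illustrative):

```python
ORANGE_LOWER = [11, 43, 46]    # lower threshold for orange, from the text
ORANGE_UPPER = [25, 255, 255]  # upper threshold for orange

def in_threshold(hsv_pixel, lower, upper):
    """True if every channel (H, S, V) lies inside its [lower, upper] interval."""
    return all(lo <= c <= hi for c, lo, hi in zip(hsv_pixel, lower, upper))

print(in_threshold([23, 80, 80], ORANGE_LOWER, ORANGE_UPPER))   # True: all channels in range
print(in_threshold([23, 20, 125], ORANGE_LOWER, ORANGE_UPPER))  # False: saturation 20 < 43
```

A pixel passing the check proceeds to S315 (turned white); a failing pixel proceeds to S319 (turned black).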
S315: and changing the color of the pixel point into the first color.
In the embodiment of the present application, the first color may be white. Continuing to explain based on the above example, the characteristic values of the hue, the saturation and the brightness of the first pixel point are all located in the threshold interval corresponding to orange, the color of the first pixel point is orange, and the color of the orange first pixel point is changed into white.
S317: and determining the area where the pixel point of the first color is located as a target area.
In the embodiment of the application, the region where the white-colored pixel point is located is determined as the target region, that is, the colors of all the pixels in the target region are white.
S319: and changing the color of the pixel point into a second color.
In an alternative embodiment, the second color is black. Continuing to explain based on the above example, the characteristic value of the saturation in the hue, the saturation and the lightness of the second pixel point is not located in the threshold interval corresponding to the orange saturation, the color of the second pixel point is not orange, and the color of the second pixel point is changed into black.
S321: and determining the area where the pixel point of the second color is located as a non-target area.
In the embodiment of the application, the area where the black pixel point is located is determined as the non-target area, that is, the colors of all pixels in the non-target area are black.
Fig. 4 is a picture provided by an embodiment of the present application, where the picture is a picture of a face covered by a finger, 401 is a finger portion, 402 is a non-finger portion, a color of the picture has undergone gray level conversion, and in an actual application process, an original picture without gray level processing is directly divided into a target area and a non-target area.
In the embodiments of the present application, the cv2.inRange function may be used to divide the picture into the target region and the non-target region. Specifically, mask = cv2.inRange(hsv_frame, lower, upper), where mask is the output binary picture containing only the target region and the non-target region, hsv_frame is the input HSV-format picture, lower is the lower threshold of the preset color, and upper is its upper threshold. For orange, lower is [11, 43, 46] and upper is [25, 255, 255].
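As a pure-NumPy illustration of what cv2.inRange computes (255 where every channel lies within the thresholds, 0 elsewhere), assuming 8-bit HSV input; the function name is hypothetical:

```python
import numpy as np

def in_range(hsv_img: np.ndarray, lower, upper) -> np.ndarray:
    """NumPy sketch of cv2.inRange: 255 where every channel lies in [lower, upper], else 0."""
    lower, upper = np.asarray(lower), np.asarray(upper)
    inside = np.all((hsv_img >= lower) & (hsv_img <= upper), axis=-1)
    return inside.astype(np.uint8) * 255

# Two pixels: an orange one (kept as 255) and a low-saturation one (dropped to 0).
img = np.array([[[23, 80, 80], [23, 20, 125]]], dtype=np.uint8)
mask = in_range(img, [11, 43, 46], [25, 255, 255])
print(mask.tolist())  # [[255, 0]]
```

The resulting binary mask is exactly the white-target / black-non-target picture described in S315-S321.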
After the picture shown in fig. 4 is processed by the cv2.inRange function described above, the picture shown in fig. 5 is obtained. Fig. 5 is a picture provided by an embodiment of the present application, with the preset color being the color of a finger. In fig. 5, the finger portion 501 is essentially white and the non-finger portion essentially black. The non-finger portion still contains some white, because it includes the person's forehead, whose color is similar to that of the finger, so it also becomes white after processing. In addition, the finger portion 501 contains a little black, namely the circled area 503, which may be noise generated during image processing.
After the picture is divided into the target region and the non-target region, some noise points may appear on it. These noise points introduce an error into the subsequently determined ratio of the area of the target region to the area of the picture. To reduce this error, in an optional embodiment, the picture may undergo erosion and dilation (morphological) processing to remove the noise; after such processing, the picture shown in fig. 5 becomes the picture shown in fig. 6. In fig. 6, the circled area 603 inside the finger portion 601 contains noticeably less noise than area 503 in fig. 5, so a more accurate ratio of the area of the target region to the area of the picture can be obtained subsequently.
In the embodiments of the present application, the picture may be dilated using the cv2.dilate function. Specifically, mask1 = cv2.dilate(mask, None, iterations=2), where mask1 is the output picture, mask is the input picture, None selects the default 3x3 convolution kernel, and iterations=2 indicates that the dilation is applied twice.
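The dilation half of this cleanup can be sketched in NumPy: with the default 3x3 kernel, each output pixel becomes the maximum of its 3x3 neighborhood, so white regions grow and small black pinholes close. The helper below is an illustrative equivalent of cv2.dilate, not the OpenCV implementation:

```python
import numpy as np

def dilate3x3(mask: np.ndarray, iterations: int = 1) -> np.ndarray:
    """NumPy sketch of cv2.dilate with the default 3x3 kernel:
    each pixel becomes the maximum of its 3x3 neighborhood."""
    out = mask.copy()
    for _ in range(iterations):
        padded = np.pad(out, 1, mode="constant")
        shifted = [padded[i:i + out.shape[0], j:j + out.shape[1]]
                   for i in range(3) for j in range(3)]
        out = np.max(shifted, axis=0)
    return out

# A single white pixel grows into a 3x3 block, closing pinhole noise around it.
m = np.zeros((5, 5), dtype=np.uint8)
m[2, 2] = 255
d = dilate3x3(m)
print(int(d.sum() // 255))  # 9
```

In the embodiment, pairing this with an erosion pass removes isolated noise such as area 503 while restoring the finger region's extent.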
S203: the ratio of the area of the target region to the area of the picture is determined.
In the embodiment of the present application, there are many methods for determining the ratio of the area of the target region to the area of the picture, and in an alternative implementation, the server may determine the number of pixels included in the target region, and divide the determined number by the total number of pixels of the picture to obtain the ratio.
For example, if the number of white pixels in the target area is 128000 and the picture is 640 × 320 (that is, 640 pixels wide and 320 pixels high), the total number of pixels is 640 × 320 = 204800, and the ratio is 128000 / 204800 = 62.5%.
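This pixel-counting computation can be sketched directly (the helper name is illustrative), reproducing the 62.5% example above:

```python
import numpy as np

def coverage_ratio(mask: np.ndarray) -> float:
    """Fraction of the picture covered by target (white, 255) pixels."""
    white = int(np.count_nonzero(mask == 255))
    return white / mask.size

# The example from the text: 128000 white pixels in a 640 x 320 picture.
mask = np.zeros((320, 640), dtype=np.uint8)
mask[:200, :] = 255            # 200 rows * 640 columns = 128000 white pixels
print(coverage_ratio(mask))    # 0.625
```

The same count is what OpenCV's non-zero-pixel utilities provide on a binary mask.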
In another alternative embodiment, the ratio of the area of the target region to the area of the picture may be determined by determining a contour line of the target region, and fig. 7 is a flowchart of a method for determining the ratio of the area of the target region to the area of the picture provided in this embodiment of the present application, specifically as shown in fig. 7, the method may include:
s701: the contour line of the target area is determined.
In the embodiments of the present application, the contour line of the target region may be determined using the cv2.findContours function. For example: contours, hierarchy = cv2.findContours(mask1_copy, mode, method), where the function outputs two values: contours, representing the contours themselves, and hierarchy, representing the attributes of the contours, i.e., their structural hierarchy. mask1_copy is a copy of the input picture, mode is the contour-retrieval mode, and method is the contour-approximation method.
In the embodiments of the present application, the contour-retrieval modes include cv2.RETR_EXTERNAL (retrieve only the outer contours), cv2.RETR_LIST (retrieve all contours without establishing a hierarchy), cv2.RETR_CCOMP (retrieve all contours and organize them into two levels), and cv2.RETR_TREE (retrieve all contours as a hierarchical tree).
In the embodiments of the present application, the approximation methods include: cv2.CHAIN_APPROX_SIMPLE, which compresses horizontal, vertical, and diagonal segments and retains only their end-point coordinates; and cv2.CHAIN_APPROX_NONE, which stores all contour points such that the pixel-position difference between two adjacent points does not exceed 1, that is, max(abs(x1-x2), abs(y1-y2)) == 1; and the like.
In this way, the server can obtain the contour of the target area.
S703: and determining the area of the target area according to the pixel points corresponding to the contour lines.
The number of pixel points enclosed by the contour line is determined; this count serves as an approximation of the area of the target region.
S705: and determining the area of the picture according to the pixel points included by the picture.
S707: the ratio of the area of the target region to the area of the picture is determined.
In the embodiments of the present application, the ratio of the area of the target region to the area of the picture, that is, the coverage ratio on the whole picture of the preset color corresponding to the target region, may then be calculated.
S205: if the ratio is larger than the preset ratio, it is determined that the camera is shielded at the moment of acquiring the picture, and the color of the object shielding the camera is matched with the preset color.
In the embodiment of the present application, the preset ratio may be set according to an empirical value, or may be set based on collected data, or may be set according to the quality of a picture, or may be set based on collected data and the quality of a picture. Based on the ratio of 62.5%, assuming that the preset ratio is 60%, it can be determined that the camera is blocked at the moment of acquiring the picture, and the color of the object blocking the camera is the preset color or a color close to the preset color.
In the embodiments of the present application, the server does not know which colors a picture contains, so the ratio corresponding to each of several preset colors must be determined on the picture, and these ratios determine whether the camera was occluded when the picture was acquired. The embodiments may use several preset colors, for example ten: red, orange, yellow, green, cyan, blue, purple, black, white, and gray. The lower and upper thresholds for these 10 colors are:
red: lower threshold [156,43,46], upper threshold [180,255,255];
orange: lower threshold [11,43,46], upper threshold [25,255,255];
yellow: lower threshold [26,43,46], upper threshold [34,255,255];
green: lower threshold [35,43,46], upper threshold [77,255,255];
cyan: lower threshold [78,43,46], upper threshold [99,255,255];
blue: lower threshold [100,43,46], upper threshold [124,255,255];
purple: lower threshold [125,43,46], upper threshold [155,255,255];
black: lower threshold [0,0,0], upper threshold [180,255,46];
white: lower threshold [0,0,221], upper threshold [180,30,255];
gray: lower threshold [0,0,46], upper threshold [180,43,220].
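The threshold table above can be transcribed as a small Python sketch. The per-pixel check below plays the role that cv2.inRange plays on a whole picture at once; the picture representation and helper names are illustrative assumptions, not the patent's implementation:

```python
# HSV threshold intervals transcribed from the table above
# (OpenCV-style ranges: H in [0, 180], S and V in [0, 255]).
HSV_RANGES = {
    "red":    ([156, 43, 46], [180, 255, 255]),
    "orange": ([11, 43, 46],  [25, 255, 255]),
    "yellow": ([26, 43, 46],  [34, 255, 255]),
    "green":  ([35, 43, 46],  [77, 255, 255]),
    "cyan":   ([78, 43, 46],  [99, 255, 255]),
    "blue":   ([100, 43, 46], [124, 255, 255]),
    "purple": ([125, 43, 46], [155, 255, 255]),
    "black":  ([0, 0, 0],     [180, 255, 46]),
    "white":  ([0, 0, 221],   [180, 30, 255]),
    "gray":   ([0, 0, 46],    [180, 43, 220]),
}

def in_range(hsv, lower, upper):
    """True if every HSV component lies inside [lower, upper]."""
    return all(lo <= c <= hi for c, lo, hi in zip(hsv, lower, upper))

def segment(picture, color):
    """Binarise a picture of HSV pixels: 1 = target region, 0 = non-target
    (the first/second-color relabelling of the region distinguishing step)."""
    lower, upper = HSV_RANGES[color]
    return [[1 if in_range(px, lower, upper) else 0 for px in row]
            for row in picture]
```

A pixel with hue 170 falls in the red interval, for example, while hue 50 does not.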
In an optional embodiment, the server may determine, for each preset color in turn, the corresponding ratio on the picture, and use that ratio to judge whether the camera was blocked when the picture was acquired. For example, after acquiring the picture, the server makes a first determination: it divides the picture into a target region and a non-target region based on the threshold corresponding to the red feature information, and determines the ratio of the area of the red target region to the area of the picture. If this ratio is larger than the preset ratio, the server determines that the camera was blocked at the moment of acquiring the picture and that the color of the blocking object matches red. If the ratio is smaller than or equal to the preset ratio, the server makes a second determination: it divides the picture into a target region and a non-target region based on the threshold corresponding to the orange feature information, determines the ratio of the area of the orange target region to the area of the picture, and again compares it with the preset ratio. This continues with a third determination, and so on, until the ratio corresponding to some preset color exceeds the preset ratio. If all preset colors have been checked and no ratio exceeds the preset ratio, it is determined that the camera was not blocked at the moment of acquiring the picture.
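The sequential, early-exit determination just described might be sketched as follows; `ratio_for` is a hypothetical stand-in for the segmentation-plus-ratio computation of the earlier steps:

```python
def first_occluding_color(ratio_for, order, preset_ratio=0.6):
    """Sequential determination: compute each preset colour's coverage
    ratio in turn and stop at the first one exceeding the preset ratio,
    so later colours are never segmented at all."""
    for color in order:
        if ratio_for(color) > preset_ratio:
            return color                 # occluding colour found
    return None                          # all colours checked: camera clear

# Toy ratios standing in for real segmentation results; a real ratio_for
# would binarise the picture and measure the target-region area.
RATIOS = {"red": 0.10, "orange": 0.70, "yellow": 0.05}
calls = []

def ratio_for(color):
    calls.append(color)                  # record which colours were computed
    return RATIOS[color]

hit = first_occluding_color(ratio_for, ["red", "orange", "yellow"])
# hit is "orange"; "yellow" is never evaluated, saving computation
```

The early return is what saves the server the remaining colour determinations mentioned below.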
In practical application scenarios, the first determination often already yields a ratio larger than the preset ratio, that is, it can immediately be determined that the camera was blocked at the moment of acquiring the picture. In that case the server need not evaluate the remaining preset colors, which saves the server's computing resources.
In another optional implementation, the server may first divide the picture, for each preset color, into a target region and a non-target region based on the threshold corresponding to that color's feature information, and determine the ratio corresponding to each color from the area of that color's target region and the area of the picture. The ratios are then sorted from large to small, and the largest ratio is compared with the preset ratio. If it is larger than the preset ratio, the server determines that the camera was blocked at the moment of acquiring the picture and that the color of the blocking object matches the preset color corresponding to the largest ratio. If it is smaller than or equal to the preset ratio, it is determined that the camera was not blocked at the moment of acquiring the picture.
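This all-at-once variant can be sketched in a few lines, assuming the per-colour ratios have already been computed as above:

```python
def max_ratio_color(ratios, preset_ratio=0.6):
    """All-at-once determination: every colour's ratio is computed first,
    the ratios are ranked, and only the largest is compared with the
    preset ratio."""
    color, best = max(ratios.items(), key=lambda kv: kv[1])
    return color if best > preset_ratio else None

blocked = max_ratio_color({"red": 0.15, "green": 0.72, "blue": 0.05})  # "green"
clear = max_ratio_color({"red": 0.15, "green": 0.32, "blue": 0.05})    # None
```

Comparing only the maximum is sufficient because if the largest ratio does not exceed the preset ratio, no other ratio can.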
In practical application scenarios, a video anchor may, for example, cover the camera with a hand for 10 consecutive seconds. After the terminal uploads the video to the server in real time, the server intercepts 5 to 6 pictures within those 10 seconds. If, for the first picture, the server determines based on the ratio corresponding to orange that the camera was blocked at the moment that picture was acquired and that the color of the blocking object matches orange, the subsequent pictures will obviously yield the same result.
Based on the above application scenario, in an optional implementation of the present application, when determining whether the camera was blocked at the moment the current picture was acquired, the server may use the previous determination result as a reference condition.
Optionally, assume the server's result for the previous picture was that the camera was blocked at the moment that picture was acquired, and that the color of the blocking object matched green. The first preset color checked for the current picture is then green: the current picture is divided into a target region and a non-target region based on the threshold corresponding to the green feature information, and the ratio of the area of the green target region to the area of the current picture is determined. If this ratio is larger than the preset ratio, the server determines that the camera was blocked at the moment of acquiring the current picture and that the color of the blocking object matches green; if it is smaller than or equal to the preset ratio, the server takes any of the other preset colors as the color for the second determination.
Optionally, assuming the server's result for the previous picture was that the camera was not blocked at the moment that picture was acquired, the ratios corresponding to the preset colors are sorted from large to small, yielding an ordering of the preset colors. After the current picture is obtained, whether the camera is occluded is detected by checking the preset colors in that previous ordering.
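The reuse of the previous result amounts to reordering the colours checked for the current picture; a sketch, with the function name and interface being illustrative assumptions:

```python
def color_order(preset_colors, previous_color):
    """Order in which to check preset colours for the current picture.
    If the previous picture was judged occluded by some colour, that
    colour is tried first; otherwise the previous ranking of
    `preset_colors` (ratios sorted large to small) is kept as-is."""
    if previous_color is None:
        return list(preset_colors)
    return [previous_color] + [c for c in preset_colors if c != previous_color]

order = color_order(["red", "orange", "green", "blue"], "green")
# -> ["green", "red", "orange", "blue"]
```

Combined with the early-exit sequential check, a stable occlusion usually terminates after the very first colour.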
Therefore, detecting across the pictures of a whole video in this way reduces the server's computational load, and for the massive numbers of pictures generated by massive numbers of videos the reduction is considerable. Meanwhile, having the server determine whether the terminal's camera is occluded saves human resources and greatly improves working efficiency.
The above description takes the server as the execution subject for determining whether the camera was occluded when the picture was acquired; when the terminal is the execution subject, the determination is performed in the same way, and reference may be made to the server-side description.
In an optional implementation, when the server determines that the camera was blocked at the moment of acquiring the picture, it may send a prompt message to the terminal. Generally, the terminal uploads video to the server quickly, and the server's determination of whether the camera was blocked is also fast with the above implementation, so the interval from the terminal uploading the video to the terminal receiving the prompt message is extremely short. The prompt can thus alert a video anchor who has not noticed that the camera is blocked, providing a good experience for both the anchor and the viewers.
In another optional implementation, when the terminal itself determines that the camera was blocked at the moment of acquiring the picture, it may issue a prompt to alert the anchor.
After the terminal receives the prompt message, or itself determines that the camera was blocked at the moment of acquiring the picture, it may optionally alert the anchor that the camera is blocked by vibrating slightly, or by flashing a light module on the terminal. In this way, live video can be monitored through the process of judging whether the camera is occluded, saving human resources while improving the efficiency of live-video monitoring.
Fig. 8 is a schematic diagram of a camera occlusion detection display interface provided in an embodiment of the present application. The display interface 801 may be shown on the screen of a terminal or on a screen connected to a server, and includes a project input box 802, a task name input box 803, a file input box 804, and a submit-for-analysis button 805. Optionally, the name of the project is entered in the project input box 802; for example, if a video was captured by the terminal through application A and uploaded to the server, application A is entered in the project input box. The task name input box 803 may hold the name of the video. The file input box 804 may accept video in formats such as rm, rmvb, mp4, wmv, asf, asx, 3gp, mov and m4v.
An embodiment of the present application further provides a camera occlusion detection apparatus. Fig. 9 is a schematic structural diagram of the camera occlusion detection apparatus provided by an embodiment of the present application. As shown in fig. 9, the apparatus includes:
the region distinguishing module 901 is configured to divide the picture into a target region and a non-target region according to a threshold value corresponding to feature information of a preset color; the target area is associated with a preset color, and the picture is acquired through a camera;
the ratio determining module 902 is configured to determine a ratio of an area of the target region to an area of the picture;
the judgment module 903 is configured to determine that the camera is blocked at the time of obtaining the picture and that the color of the object blocking the camera is matched with the preset color if the ratio is greater than the preset ratio.
In an alternative embodiment, the area distinguishing module 901 is further configured to determine an upper threshold value and a lower threshold value corresponding to the feature information of the preset color;
determining a threshold interval corresponding to the preset color according to the upper threshold value and the lower threshold value;
if the characteristic value corresponding to the characteristic information of the pixel point contained in the picture is located in the threshold interval, changing the color of the pixel point into a first color; otherwise, changing the color of the pixel point to a second color;
determining the area where the pixel points of the first color are located as a target area;
and determining the area where the pixel point of the second color is located as a non-target area.
In an alternative embodiment, the ratio determining module 902 is further configured to determine a contour line of the target region;
determining the area of a target area according to the pixel points corresponding to the contour lines;
determining the area of the picture according to pixel points included in the picture;
the ratio of the area of the target region to the area of the picture is determined.
In an alternative embodiment, the apparatus further comprises:
the receiving module is used for acquiring a video;
and intercepting the video according to a preset time period to obtain the picture.
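The receiving module's interception of pictures at a preset time period can be sketched by computing which frame indices to grab; in practice the frames would be read with OpenCV's VideoCapture, and the function name here is an illustrative assumption:

```python
def sample_frame_indices(duration_s, fps, period_s):
    """Frame indices to grab when a video is sampled once every
    `period_s` seconds (the 'preset time period').  With OpenCV one
    would seek via cap.set(cv2.CAP_PROP_POS_FRAMES, i) before each
    cap.read() to obtain the corresponding picture."""
    step = int(round(fps * period_s))        # frames between samples
    total_frames = int(duration_s * fps)     # frames in the whole clip
    return list(range(0, total_frames, step))

# A 10 s, 30 fps clip sampled every 2 s yields 5 pictures, consistent
# with the "5 to 6 pictures within 10 seconds" example earlier.
indices = sample_frame_indices(10, 30, 2)    # [0, 60, 120, 180, 240]
```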
In an alternative embodiment, the region distinguishing module 901 is further configured to divide the picture into a target region and a non-target region corresponding to each color according to a threshold value corresponding to the feature information of each color among the preset colors;
the ratio determining module 902 is configured to determine a ratio corresponding to each color according to an area of a target region corresponding to each color and an area of a picture; and sorting the ratio corresponding to each color from large to small according to the numerical value.
The judgment module 903 is configured to determine a ratio corresponding to the largest numerical value, and if the ratio is greater than a preset ratio, determine that the camera is blocked at the time of obtaining the picture.
The apparatus embodiments and the method embodiments of the present application are based on the same inventive concept.
The method provided by the embodiments of the present application may be executed in a computer terminal, a server, or a similar computing device. Taking execution on a server as an example, fig. 10 is a block diagram of the hardware structure of a server for the camera occlusion detection method provided in an embodiment of the present application. As shown in fig. 10, the server 1000 may vary considerably with configuration or performance, and may include one or more central processing units (CPUs) 1010 (the processor 1010 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), a memory 1030 for storing data, and one or more storage media 1020 (e.g., one or more mass storage devices) for storing applications 1023 or data 1022. The memory 1030 and the storage medium 1020 may provide transient or persistent storage. The program stored in the storage medium 1020 may include one or more modules, each of which may include a series of instruction operations for the server. Further, the central processor 1010 may be configured to communicate with the storage medium 1020 and execute the series of instruction operations in the storage medium 1020 on the server 1000. The server 1000 may also include one or more power supplies 1060, one or more wired or wireless network interfaces 1050, one or more input/output interfaces 1040, and/or one or more operating systems 1021, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and so forth.
The input/output interface 1040 may be used to receive or transmit data via a network. A specific example of the above network is a wireless network provided by a communication provider of the server 1000. In one example, the input/output interface 1040 includes a network adapter (Network Interface Controller, NIC) that can be connected to other network devices via a base station so as to communicate with the internet. In another example, the input/output interface 1040 may be a radio frequency (RF) module for communicating with the internet wirelessly.
It will be understood by those skilled in the art that the structure shown in fig. 10 is merely illustrative and is not intended to limit the structure of the electronic device. For example, server 1000 may also include more or fewer components than shown in FIG. 10, or have a different configuration than shown in FIG. 10.
Embodiments of the present application further provide a storage medium, where the storage medium may be disposed in a server to store at least one instruction, at least one program, a code set, or a set of instructions related to implementing a camera occlusion detection method in the method embodiments, and the at least one instruction, the at least one program, the code set, or the set of instructions are loaded and executed by the processor to implement the camera occlusion detection method provided in the method embodiments.
Alternatively, in this embodiment, the storage medium may be located in at least one of a plurality of network servers in a computer network. Optionally, in this embodiment, the storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or another medium that can store program code.
As can be seen from the above embodiments of the camera occlusion detection method, apparatus, electronic device, and storage medium provided by the present application, a picture is divided into a target region and a non-target region according to a threshold value corresponding to the feature information of a preset color, where the target region is associated with the preset color and the picture is acquired through a camera; the ratio of the area of the target region to the area of the picture is determined; and if the ratio is larger than the preset ratio, it is determined that the camera was blocked at the moment of acquiring the picture and that the color of the object blocking the camera matches the preset color. In this way, live video can be monitored through the process of judging whether the camera is occluded, saving human resources while improving the efficiency of live-video monitoring.
It should be noted that: the sequence of the embodiments of the present application is only for description, and does not represent the advantages and disadvantages of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and portions that are similar to each other in the embodiments are referred to each other, and each embodiment focuses on differences from other embodiments. In particular, for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. A camera occlusion detection method, the method comprising:
dividing the picture into a target area and a non-target area according to a threshold value corresponding to the characteristic information of the preset color; the target area is associated with the preset color, and the picture is acquired through a camera;
determining a ratio of the area of the target region to the area of the picture;
if the ratio is larger than a preset ratio, it is determined that the camera is shielded at the moment of acquiring the picture, and the color of an object shielding the camera is matched with the preset color.
2. The method according to claim 1, wherein the dividing the picture into the target area and the non-target area according to the threshold value corresponding to the feature information of the preset color comprises:
determining an upper threshold value and a lower threshold value corresponding to the characteristic information of the preset color;
determining a threshold interval corresponding to the preset color according to the upper threshold value and the lower threshold value;
if the characteristic value corresponding to the characteristic information of the pixel point contained in the picture is located in the threshold interval, changing the color of the pixel point into a first color; otherwise, changing the color of the pixel point to a second color;
determining the area where the pixel point of the first color is located as the target area;
and determining the area where the pixel point of the second color is located as the non-target area.
3. The method of claim 1, wherein determining the ratio of the area of the target region to the area of the picture comprises:
determining the contour line of the target area;
determining the area of the target area according to the pixel points corresponding to the contour lines;
determining the area of the picture according to pixel points included in the picture;
determining a ratio of an area of the target region to an area of the picture.
4. The method of claim 1, further comprising:
the characteristic information includes hue, saturation, and lightness.
5. The method of claim 1, further comprising:
acquiring a video;
and intercepting the video according to a preset time period to obtain the picture.
6. The method of claim 1,
the dividing the picture into a target area and a non-target area according to the threshold value corresponding to the feature information of the preset color comprises:
dividing the picture into a target area and a non-target area corresponding to each color according to a threshold value corresponding to the characteristic information of each color in preset colors;
the determining a ratio of the area of the target region to the area of the picture comprises:
determining a ratio corresponding to each color according to the area of the target region corresponding to each color and the area of the picture;
sorting the ratio corresponding to each color from large to small according to numerical values;
if the ratio is greater than a preset ratio, determining that the camera is shielded at the moment of acquiring the picture, including:
and determining the ratio corresponding to the maximum numerical value, and if the ratio is greater than a preset ratio, determining that the camera is shielded at the moment of acquiring the picture.
7. A camera occlusion detection device, the device comprising:
the area distinguishing module is used for dividing the picture into a target area and a non-target area according to a threshold value corresponding to the characteristic information of the preset color; the target area is associated with the preset color, and the picture is acquired through a camera;
a ratio determination module for determining a ratio of the area of the target region to the area of the picture;
and the judging module is used for determining that the camera is shielded at the moment of acquiring the picture and the color of the object shielding the camera is matched with the preset color if the ratio is greater than the preset ratio.
8. The apparatus of claim 7,
the area distinguishing module is used for determining an upper threshold value and a lower threshold value corresponding to the characteristic information of the preset color; determining a threshold interval corresponding to the preset color according to the upper threshold value and the lower threshold value; if the characteristic value corresponding to the characteristic information of the pixel point contained in the picture is located in the threshold interval, changing the color of the pixel point into a first color; otherwise, changing the color of the pixel point to a second color; determining the area where the pixel point of the first color is located as the target area; and determining the region where the pixel point of the second color is located as the non-target region.
9. An electronic device, comprising a processor and a memory, wherein the memory has stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement the camera occlusion detection method according to any of claims 1-6.
10. A computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the camera occlusion detection method of any of claims 1-6.
CN201910537620.1A 2019-06-20 2019-06-20 Camera shielding detection method and device, electronic equipment and storage medium Pending CN112116551A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910537620.1A CN112116551A (en) 2019-06-20 2019-06-20 Camera shielding detection method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910537620.1A CN112116551A (en) 2019-06-20 2019-06-20 Camera shielding detection method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112116551A true CN112116551A (en) 2020-12-22

Family

ID=73796065

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910537620.1A Pending CN112116551A (en) 2019-06-20 2019-06-20 Camera shielding detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112116551A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113469173A (en) * 2020-03-31 2021-10-01 珠海格力电器股份有限公司 Signal lamp shielding detection method and device, terminal and computer readable medium
CN113379705A (en) * 2021-06-09 2021-09-10 苏州智加科技有限公司 Image processing method, image processing device, computer equipment and storage medium
CN113705332A (en) * 2021-07-14 2021-11-26 深圳市有为信息技术发展有限公司 Method and device for detecting shielding of camera of vehicle-mounted terminal, vehicle-mounted terminal and vehicle
JP2022106926A (en) * 2021-08-16 2022-07-20 ベイジン バイドゥ ネットコム サイエンス テクノロジー カンパニー リミテッド Camera shielding detection method, device, electronic apparatus, storage medium and computer program
EP4064212A3 (en) * 2021-08-16 2022-12-28 Beijing Baidu Netcom Science Technology Co., Ltd. Method and apparatus for detecting camera occlusion, device, storage medium and program product
CN114952809A (en) * 2022-06-24 2022-08-30 中国科学院宁波材料技术与工程研究所 Workpiece identification and pose detection method and system and grabbing control method of mechanical arm

Similar Documents

Publication Publication Date Title
CN112116551A (en) Camera shielding detection method and device, electronic equipment and storage medium
CN108600781B (en) Video cover generation method and server
US9117112B2 (en) Background detection as an optimization for gesture recognition
CN107241556B (en) Light measuring method and device of image acquisition equipment
US10728510B2 (en) Dynamic chroma key for video background replacement
CN109151436B (en) Data processing method and device, electronic equipment and storage medium
WO2022227308A1 (en) Image processing method and apparatus, device, and medium
CN112241714B (en) Method and device for identifying designated area in image, readable medium and electronic equipment
CN111985281B (en) Image generation model generation method and device and image generation method and device
CN110392306B (en) Data processing method and equipment
CN106651797B (en) Method and device for determining effective area of signal lamp
CN109257609B (en) Data processing method and device, electronic equipment and storage medium
CN108347427A (en) A kind of video data transmission, processing method, device and terminal, server
CN104145477B (en) Adjust the method and system of color
CN109274976B (en) Data processing method and device, electronic equipment and storage medium
CN112001274A (en) Crowd density determination method, device, storage medium and processor
CN111369486A (en) Image fusion processing method and device
CN114187541A (en) Intelligent video analysis method and storage device for user-defined service scene
CN113793366A (en) Image processing method, device, equipment and storage medium
CN111797694B (en) License plate detection method and device
CN113038002A (en) Image processing method and device, electronic equipment and readable storage medium
CN111754492A (en) Image quality evaluation method and device, electronic equipment and storage medium
CN113191376A (en) Image processing method, image processing device, electronic equipment and readable storage medium
CN114881886A (en) Image processing method, image processing device, electronic equipment and storage medium
CN112714299B (en) Image display method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination