CN115661735A - Target detection method and device and computer readable storage medium - Google Patents

Target detection method and device and computer readable storage medium

Info

Publication number
CN115661735A
CN115661735A
Authority
CN
China
Prior art keywords
target
interference
area
image
recognized
Prior art date
Legal status
Pending
Application number
CN202211049159.3A
Other languages
Chinese (zh)
Inventor
李宁钏
严谨
熊剑平
孙海涛
赵蕾
杨剑波
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202211049159.3A priority Critical patent/CN115661735A/en
Publication of CN115661735A publication Critical patent/CN115661735A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The application discloses a target object detection method, a target object detection device, and a computer-readable storage medium. The target object detection method includes: acquiring an image to be recognized; identifying the region where an interference target is located in the image to be recognized to obtain an interference target region; performing pixel filling on the interference target region in the image to be recognized to obtain a processed image to be recognized; performing saliency target detection on the processed image to be recognized to obtain a saliency target; and confirming whether the saliency target is the target object to generate a target object detection result. In this way, the accuracy of target object detection can be improved.

Description

Target object detection method and device and computer readable storage medium
Technical Field
The present application relates to the field of target detection technologies, and in particular, to a target detection method, apparatus, and computer-readable storage medium.
Background
In sites in fields such as electric power and energy, petrochemicals, rail transit, or manufacturing, various target objects in an equipment monitoring scene need to be monitored to ensure normal equipment operation and production safety. However, the types of target objects in practical application scenes are varied, and real image material containing these targets is scarce, so target object detection accuracy is low.
Disclosure of Invention
The application provides a target object detection method, a target object detection device and a computer-readable storage medium, which can improve the target object detection precision.
To solve the above technical problem, one technical solution adopted by the present application is to provide a target object detection method including: acquiring an image to be recognized; identifying the region where an interference target is located in the image to be recognized to obtain an interference target region; performing pixel filling on the interference target region in the image to be recognized to obtain a processed image to be recognized; performing saliency target detection on the processed image to be recognized to obtain a saliency target; and confirming whether the saliency target is the target object to generate a target object detection result.
To solve the above technical problem, another technical solution adopted by the present application is to provide a target object detection device including a memory and a processor connected to each other, wherein the memory is used for storing a computer program which, when executed by the processor, implements the target object detection method in the above technical solution.
To solve the above technical problem, yet another technical solution adopted by the present application is to provide a computer-readable storage medium for storing a computer program which, when executed by a processor, implements the target object detection method in the above technical solution.
The beneficial effects of the present application are as follows. Improving target object detection accuracy by expanding the set of target object types is time-consuming, labor-intensive, and ineffective. By identifying the interference target and filling the interference target region with pixels, the influence of the interference target region on subsequent target object detection can be reduced, effectively improving detection accuracy and the generalization ability of target object detection. The type of interference target can be customized according to the actual situation, so the target object detection method can be adapted to different application scenes with high flexibility. In addition, the saliency targets can be screened to determine whether they are the target object, which further improves detection accuracy and reduces the false alarm rate.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort. In the drawings:
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of a method for detecting a target object provided herein;
FIG. 2 is a schematic flow chart diagram illustrating another embodiment of a method for detecting a target object provided herein;
FIG. 3 is a schematic flow chart diagram illustrating an embodiment of step 23 provided herein;
FIG. 4 is a schematic diagram of an image to be recognized in an application scene provided by the present application;
FIG. 5 is a schematic illustration of a mask image of an interference target provided herein;
FIG. 6 is a schematic diagram of a processed image to be recognized provided herein;
FIG. 7 is a schematic flow chart diagram illustrating one embodiment of step 25 provided herein;
FIG. 8 is a schematic illustration of a mask image of a saliency target provided by the present application;
FIG. 9 is a schematic illustration of a mask image of a saliency target provided by the present application;
FIG. 10 is a schematic diagram of an embodiment of a target detection apparatus provided in the present application;
FIG. 11 is a schematic structural diagram of an embodiment of a computer-readable storage medium provided in the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be noted that the following examples are only illustrative of the present application, and do not limit the scope of the present application. Likewise, the following examples are only some examples and not all examples of the present application, and all other examples obtained by a person of ordinary skill in the art without any inventive work are within the scope of the present application.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein may be combined with other embodiments.
It should be noted that the terms "first", "second" and "third" in the present application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of indicated technical features. Thus, a feature defined as "first," "second," or "third" may explicitly or implicitly include at least one of the feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless explicitly specified otherwise. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Referring to fig. 1, fig. 1 is a schematic flowchart of an embodiment of a target object detection method provided in the present application, the method including:
step 11: and acquiring an image to be identified.
The image to be recognized can be acquired by a monitoring device arranged in a monitoring scene, and the monitoring scene can be selected according to the actual application requirements, for example: a production workshop, a rail transit site, or a chemical plant, without limitation.
Step 12: and identifying the area where the interference target in the image to be identified is located to obtain an interference target area.
The region where the interference target is located in the image to be recognized is identified to obtain an interference target region. An interference target is an object that interferes with the detection of the target object, where the target object is the object to be detected in a given application scene; an interference target is an object that is definitely not in the target object detection category. Interference targets can be preset, and their interference with target object detection is then eliminated in the subsequent detection process.
Specifically, the image to be recognized may be processed by using a target detection algorithm, so as to obtain an interference target region, or the image to be recognized may be input into the target detection network model, so as to obtain the interference target region, where the target detection technology used to recognize the interference target region is not limited. It is to be understood that the above target detection network model may be obtained by training with a training sample set containing the interfering target, and the target detection network model has the capability of identifying the interfering target.
Step 13: and performing pixel filling on the interference target area in the image to be recognized to obtain the processed image to be recognized.
Pixel filling is performed on the interference target region in the image to be recognized to obtain the processed image to be recognized. Specifically, this embodiment detects the target object in the image to be recognized using a saliency target detection technique. Because of the presence of interference targets, an interference target is easily misjudged as the target object, reducing detection accuracy. Filling the interference target region with pixels weakens that region, reducing its influence on detection accuracy. It can be understood that a suitable pixel value may be selected according to the actual application to fill the interference target region, which is not limited here.
Step 14: and carrying out saliency target detection on the processed image to be identified to obtain a saliency target.
And performing saliency target detection on the processed image to be identified to obtain a saliency target, wherein saliency target detection can be realized by using a saliency target detection technology in the technical field of target detection, and the saliency target detection technology is not detailed or limited herein.
Step 15: and confirming whether the significant target is the target object or not, and generating a target object detection result.
After the saliency targets are identified, they can be screened to further confirm whether each saliency target is the target object, thereby generating a target object detection result and improving the accuracy of target object detection. The target object detection result may include the position or type information of the target object.
Specifically, a saliency target detection technique can detect saliency targets with conspicuous color or conspicuous motion; however, an identified saliency target may be a non-target object other than the preset interference targets. Whether a saliency target is the target object can then be confirmed according to the characteristics of the target object, or non-target objects can be excluded according to the characteristics of salient non-target objects, where both sets of characteristics can be chosen according to actual requirements. Taking a food production workshop as an example, the target object detection method of this embodiment may be used for food production safety monitoring, that is, monitoring whether an object threatening food production appears in the scene. The types of such objects cannot be fully enumerated in advance, but experience shows that flying insects are a clear and common target object: flying insects in a food production workshop are characterized by high moving speed and small size, so whether a saliency target is a flying insect can be judged from characteristics such as its motion trajectory and/or size, in order to raise an alarm promptly when flying insects appear.
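The speed-and-size screening described above (fast-moving, small saliency targets flagged as flying insects) can be sketched as follows. This is only an illustrative sketch; the threshold values and the averaged-displacement speed measure are assumptions, not values given in the patent.

```python
import math

def mean_speed(track):
    """Average per-frame centroid displacement (pixels/frame) of a track."""
    if len(track) < 2:
        return 0.0
    dist = sum(math.dist(track[i], track[i + 1]) for i in range(len(track) - 1))
    return dist / (len(track) - 1)

def looks_like_flying_insect(track, area, min_speed=8.0, max_area=150):
    """Flag small, fast-moving saliency targets as flying insects.
    The thresholds are illustrative placeholders, not values from the patent."""
    return mean_speed(track) >= min_speed and area <= max_area

# A short, fast track of a small target triggers the alarm condition.
track = [(10, 10), (22, 14), (35, 20), (47, 27)]
print(looks_like_flying_insect(track, area=60))
```

In practice the track would come from the tracking step and the area from the saliency mask; both thresholds would be tuned per scene.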
It is understood that the target detection method in the present embodiment can be applied to various application scenarios other than the food production safety detection exemplified above, for example: the detection of foreign objects or drops in a certain monitoring scene is not limited herein.
Because target objects in practical application scenes are varied, improving detection accuracy by expanding the set of target object types is time-consuming, labor-intensive, and ineffective. By identifying the interference target and performing pixel filling on the interference target region, this embodiment reduces the influence of the interference target region on subsequent target object detection, effectively improving detection accuracy and generalization ability. The type of interference target can be customized according to the actual situation, so the method can be adapted to different application scenes with high flexibility. In addition, the saliency targets can be screened to determine whether they are the target object, further improving detection accuracy and reducing the false alarm rate.
Referring to fig. 2, fig. 2 is a schematic flow chart of another embodiment of a target object detection method provided in the present application, the method including:
step 21: and acquiring an image to be identified.
Step 21 is the same as step 11 in the above embodiment, and is not described again.
Step 22: and identifying the interference target in the image to be identified by using the target segmentation model, and segmenting the region where the interference target is located to obtain an interference target region.
The interference target in the image to be recognized is identified using a target segmentation model, and the region where the interference target is located is segmented out to obtain the interference target region. Specifically, the target segmentation model can be trained with a preset training sample set containing a number of training samples, each including a sample target of the same type as the interference target; the preset training sample set can be constructed or obtained according to the actual situation, which is not limited here. Taking a person as the interference target as an example, the target segmentation model can be trained with training samples containing portraits so that it learns to recognize people. It can be understood that the target segmentation model may be a target segmentation network model common in the field of target detection, such as the U-Net segmentation network; the type and specific structure of the target segmentation model are not limited here.
Furthermore, the type and number of interference targets can be set according to the actual situation: there may be one, two, or more interference targets. When there are at least two, the target segmentation model can be trained separately with at least two preset training sample sets so that it acquires the ability to recognize every type of interference target, with the training samples in each preset training sample set containing sample targets of the same type as the corresponding interference target.
Take the case where the target object is a foreign object appearing in the image to be recognized, that is, an object not expected to appear in the monitored scene. The interference target may then be an object that interferes with foreign object detection, such as an object that frequently appears in the current monitored scene or one that definitely does not belong to the foreign object category, such as a person or a vehicle. For example, workers frequently pass through a certain workshop; they do not affect normal production and are outside the scope of foreign object detection, yet passing workers may be misjudged as foreign objects during detection. In this case, people can be preset as the interference target, so that their interference with foreign object detection is removed in the subsequent detection process.
If the target segmentation model does not identify the interfering target, it indicates that there is no interfering target in the image to be identified, and then the process may directly proceed to step 24.
Step 23: and based on the pixels of the non-interference area in the image to be recognized, carrying out pixel filling on the interference target area to obtain the processed image to be recognized.
The non-interference area is other image areas except the interference target area in the image to be identified, namely a background area; as shown in fig. 3, step 23 may include steps 31 to 32 described below.
Step 31: and acquiring a reference value of the pixel value of the non-interference area in the image to be identified.
The reference value is determined based on the mode or the mean of the pixel values of the non-interference region. Specifically, the mode or the mean of the pixel values of the background region in the image to be recognized can be obtained and used as the fill value for the interference target region, so that the interference target region is treated as background in subsequent target detection, weakening the interference target's effect on detection. It can be understood that, in other embodiments, the variance or the median of the pixel values of the non-interference region may also be used, which is not limited here.
Step 32: and filling the interference target area by using the reference value of the pixel value to obtain the processed image to be identified.
The interference target region is filled using the reference pixel value to obtain the processed image to be recognized. Referring to fig. 4, take the image to be recognized in the application scene shown in fig. 4 as an example, with a person as the interference target. The image can be segmented with the target segmentation model to obtain the interference target mask image shown in fig. 5, where the dark region is the interference target region and the remaining region is the background region. The reference value of the background pixel values is obtained and then used to fill the interference target region, producing the processed image to be recognized shown in fig. 6, in which the interference target region has been pixel-filled.
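The reference-value filling of steps 31 to 32 can be sketched as below. This is a minimal NumPy sketch under the assumption that the interference target region is given as a boolean mask (as produced by the segmentation model); function and parameter names are illustrative.

```python
import numpy as np

def fill_interference_region(image, mask, stat="mean"):
    """Fill the masked interference region with a reference value (mean or
    mode) computed from the non-interference (background) pixels.
    `mask` is a boolean array that is True inside the interference region."""
    filled = image.copy()
    background = image[~mask]                   # pixels outside the region
    if stat == "mean":
        ref = background.mean(axis=0)           # per-channel mean for color
    else:                                       # mode of the background pixels
        values, counts = np.unique(background, axis=0, return_counts=True)
        ref = values[counts.argmax()]
    filled[mask] = ref.astype(image.dtype)
    return filled

# A tiny 4x4 grayscale example: background is 100, the interference
# region (e.g. a person) is 255 and gets filled back to the background mean.
img = np.full((4, 4), 100, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
img[mask] = 255
out = fill_interference_region(img, mask)
print(out[1, 1])  # 100
```

Filling with a background statistic (rather than, say, zeros) keeps the filled region statistically close to the background, which is what lets the saliency detector ignore it.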
In a specific embodiment, pixel filling may instead be performed on the interference target region directly based on preset pixel information to obtain the processed image to be recognized. The preset pixel information may include the pixel values to be filled into the interference target region, which may be a single pixel value, multiple pixel values, or a pixel arrangement rule, and can be set according to the actual situation; this is not limited here.
Step 24: and carrying out saliency target detection on the processed image to be identified to obtain a saliency target.
Saliency target detection is performed on the processed image to be recognized to obtain a saliency target. Specifically, a saliency target detection model can be used for this purpose. When a saliency target is confirmed to be the target object, the image to be recognized can serve as a training sample for the saliency target detection model: the model is trained with the region image where the target object is located together with the image to be recognized, realizing feedback training and further improving the detection accuracy of the saliency target detection model.
Specifically, before step 24, the saliency target detection model may be trained with a preset training sample set so that it acquires the ability to recognize saliency targets; the preset training sample set can be constructed or obtained according to the actual situation, which is not limited here. It can be understood that the saliency target detection model may be a network model conventional in the field of saliency target detection, such as the U2-Net detection network; the type and specific structure of the model are not detailed or limited here.
Step 25: and confirming whether the significant target is the target object or not, and generating a target object detection result.
As shown in fig. 7, the following embodiment will explain step 25 by taking foreign object target detection as an example.
Step 71: and acquiring the area of the region where the saliency target is located.
Take the processed image to be recognized shown in fig. 6 as an example. The saliency target detection model can perform target detection on this image to obtain a saliency target and the saliency target mask image shown in fig. 8, where the recognized saliency target is a bird nest; the dark region in the mask image is the region where the saliency target is located, and the remaining region is the background region. Contour extraction can then be performed on the saliency target in the mask image and the contour area computed, giving the area of the region where the saliency target is located.
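The area computation in step 71 can be approximated by counting the pixels of each connected component in the saliency mask. This pure-Python/NumPy sketch is a simple stand-in for the contour-extraction-plus-contour-area step (which an OpenCV pipeline would typically do with cv2.findContours and cv2.contourArea); the function name is illustrative.

```python
from collections import deque
import numpy as np

def component_areas(mask):
    """Pixel-count area of each 4-connected component in a boolean mask,
    standing in for contour extraction plus contour-area computation."""
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    areas = []
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                # BFS flood fill over one salient blob, counting its pixels.
                area, queue = 0, deque([(sy, sx)])
                seen[sy, sx] = True
                while queue:
                    y, x = queue.popleft()
                    area += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                areas.append(area)
    return areas

# Two separate salient blobs: a 2x2 block (area 4) and a 1x3 strip (area 3).
m = np.zeros((5, 8), dtype=bool)
m[1:3, 1:3] = True
m[4, 4:7] = True
print(sorted(component_areas(m)))  # [3, 4]
```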
It can be understood that if steps 22 to 23 of this embodiment are skipped and saliency target detection is performed directly on the image to be recognized shown in fig. 4, a mask image like that shown in fig. 9 may be obtained, in which both the person and the bird nest are recognized, easily causing misjudgment; steps 22 to 23 of this embodiment avoid such misjudgment.
Step 72: and tracking the salient target to obtain target track information of the salient target.
The saliency target can be tracked to obtain its target trajectory information. Specifically, the position information of the region where the saliency target is located is obtained, and the saliency target is then tracked across consecutive frames of images to be recognized using a target tracking algorithm from the field of target detection, yielding the target trajectory information; the target tracking algorithm is not detailed or limited here.
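Since the patent leaves the tracking algorithm open, the frame-to-frame linking can be illustrated with a greedy nearest-centroid tracker. This is a minimal sketch, not the patent's method; production pipelines typically use Kalman- or IoU-based trackers such as SORT, and all names here are illustrative.

```python
import math

def track_nearest_centroid(frames, max_jump=50.0):
    """Link per-frame centroid detections into trajectories: each detection
    is attached to the closest existing track head within `max_jump` pixels,
    otherwise it starts a new track."""
    tracks = []
    for detections in frames:
        unmatched = list(detections)
        for track in tracks:
            if not unmatched:
                break
            head = track[-1]
            nearest = min(unmatched, key=lambda p: math.dist(head, p))
            if math.dist(head, nearest) <= max_jump:
                track.append(nearest)
                unmatched.remove(nearest)
        tracks.extend([p] for p in unmatched)  # unclaimed detections -> new tracks
    return tracks

# Two targets per frame; each gets linked into its own trajectory.
frames = [[(10, 10), (100, 100)], [(14, 12), (101, 99)], [(19, 15), (103, 101)]]
print(track_nearest_centroid(frames))
```

The resulting trajectories feed directly into the motion-state check of step 73.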
Step 73: and confirming whether the saliency target is a foreign object target or not based on the area and the target track information.
Whether the saliency target is a foreign object can be confirmed based on its area and target trajectory information. Specifically, the area of the region where the saliency target is located reflects its size, and the target trajectory information reflects its motion state, so foreign objects can be screened according to the size and motion characteristics of the saliency target.
Specifically, in one embodiment, step 73 may include: determining whether the area is larger than a preset area threshold and whether the target trajectory information satisfies a preset trajectory condition; and, in response to the area being greater than the preset area threshold and the target trajectory information satisfying the preset trajectory condition, determining that the saliency target is a foreign object.
The target state of the saliency target can be determined based on the target trajectory information, and in response to the target state being a motion state, the target trajectory information is determined to satisfy the preset trajectory condition. During saliency target detection, a background object that has always existed in the background region but has conspicuous features may be falsely detected. A background object is stationary or only shifts slightly within a limited area, whereas a foreign object suddenly appears in the monitoring picture and is generally in obvious motion. Whether the saliency target is moving can therefore be judged from its target trajectory information, screening background objects out of the saliency targets. Still taking the image to be recognized shown in fig. 4 as an example, the hob in the background region may be detected as a saliency target, and it can then be confirmed from its motion trajectory that the hob is not a foreign object.
Furthermore, monitoring images exhibit perspective: targets close to the monitoring device appear large in the picture, while distant targets appear small. In general, only foreign objects appearing in the monitored area near the device are of concern, and targets appearing in distant areas can be ignored, so distant saliency targets can be screened out by the area of the region where they are located. For example, if the saliency target detected in the image to be recognized shown in fig. 4 is an airplane flying over a distant building, the area of the region where the airplane is located shows that it is not a foreign object. It can be understood that the preset area threshold can be set according to the actual application; its specific value can be customized, based on experiment or experience, from the pixel-block range occupied by nearby targets in the monitoring image, and is generally at least larger than four pixel blocks, which is not limited here.
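The two screening conditions of step 73 (area above a preset threshold, trajectory indicating motion) can be combined as below. This is an illustrative sketch: the displacement-based motion test and all threshold values are assumptions, since the patent only requires the area threshold to exceed roughly four pixel blocks.

```python
import math

def total_displacement(track):
    """Straight-line displacement between the first and last track points."""
    return math.dist(track[0], track[-1]) if len(track) >= 2 else 0.0

def is_foreign_object(area, track, area_threshold=16, min_displacement=5.0):
    """A saliency target counts as a foreign object only when its region is
    large enough (screening out distant targets) and it is in motion
    (screening out static background objects). Thresholds are illustrative."""
    moving = total_displacement(track) > min_displacement
    return area > area_threshold and moving

# A static background object (e.g. the hob) fails the motion condition.
print(is_foreign_object(area=400, track=[(50, 50), (50, 50), (51, 50)]))  # False
# A large, moving target satisfies both conditions.
print(is_foreign_object(area=400, track=[(50, 50), (70, 60), (90, 75)]))  # True
```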
In another embodiment, whether the saliency target is the target object can also be determined according to the region in which it appears; that is, step 25 may further include: acquiring a preset region of interest, judging whether the saliency target is within the region of interest, and confirming that the saliency target is the target object in response to it being within the region of interest. It can be understood that the region of interest is a delimited region for detecting the target object: only targets appearing within this delimited region are considered for target object detection, and the region of interest can be set by the user according to the actual application requirements, which is not limited here.
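The region-of-interest check above reduces to a containment test on boxes. A minimal sketch, assuming axis-aligned (x1, y1, x2, y2) boxes and a full-containment policy; the patent does not fix these details, and center-point or overlap tests are equally valid choices.

```python
def in_region_of_interest(bbox, roi):
    """True when the saliency target's bounding box lies entirely inside the
    user-defined region of interest. Boxes are (x1, y1, x2, y2) tuples."""
    x1, y1, x2, y2 = bbox
    rx1, ry1, rx2, ry2 = roi
    return rx1 <= x1 and ry1 <= y1 and x2 <= rx2 and y2 <= ry2

roi = (0, 0, 640, 360)  # e.g. only the near half of a 640x720 frame
print(in_region_of_interest((100, 50, 180, 120), roi))   # inside: True
print(in_region_of_interest((600, 340, 700, 420), roi))  # spills outside: False
```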
This embodiment obtains the reference value of the pixel values of the image regions other than the interference target region in the image to be recognized, and then fills the interference target region with this reference value, effectively weakening the influence of the interference target region on target object detection and greatly improving detection accuracy and generalization ability; because pixel filling is simple and efficient, processing cost is also greatly reduced. When a saliency target is confirmed to be the target object, the image to be recognized can be used as a training sample for the saliency target detection model, realizing feedback training and further improving the model's detection accuracy for the target object. In addition, the target object detection method can be applied to foreign object detection scenes: after the saliency target is obtained, whether it is a foreign object can be confirmed based on its area and target trajectory information, excluding other interfering targets among the saliency targets, greatly reducing the false alarm rate and improving foreign object detection accuracy.
Referring to fig. 10, fig. 10 is a schematic structural diagram of an embodiment of the object detection apparatus provided in the present application, the object detection apparatus 100 includes a memory 101 and a processor 102 connected to each other, the memory 101 is used for storing a computer program, and when the computer program is executed by the processor 102, the computer program is used for implementing the object detection method in the foregoing embodiment.
Referring to fig. 11, fig. 11 is a schematic structural diagram of an embodiment of a computer-readable storage medium provided in the present application. The computer-readable storage medium 110 is used for storing a computer program 111 which, when executed by a processor, implements the target object detection method of the foregoing embodiments.
The computer-readable storage medium 110 may be a server, a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is only a logical division, and other divisions are possible in practice; for instance, multiple units or components may be combined or integrated into another system, and some features may be omitted or not executed.

Units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.

In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. An integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the technical solution of the present application involves personal information, any product applying this solution clearly informs the user of the personal information processing rules and obtains the individual's separate consent before processing the personal information. If the solution involves sensitive personal information, the product obtains the individual's separate consent before processing it and additionally meets the requirement of "express consent". For example, at a personal information collection device such as a camera, a clear and prominent sign informs people that they are entering the personal information collection range and that personal information will be collected; if a person voluntarily enters the collection range, this is regarded as consent to the collection. Alternatively, on a device that processes personal information, under the condition that the personal information processing rules are communicated through a prominent sign or message, personal authorization is obtained by means such as a pop-up message or by asking the person to upload their personal information themselves. The personal information processing rules may include information such as the personal information processor, the purpose of processing, the processing method, and the types of personal information processed.
The above description is only of embodiments of the present application and is not intended to limit its scope of protection. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of protection of the present application.

Claims (11)

1. A target object detection method, comprising:
acquiring an image to be recognized;
identifying a region where an interference target is located in the image to be recognized to obtain an interference target region;
performing pixel filling on the interference target region in the image to be recognized to obtain a processed image to be recognized;
performing salient target detection on the processed image to be recognized to obtain a salient target; and
confirming whether the salient target is a target object, and generating a target object detection result.
2. The target object detection method according to claim 1, wherein the step of performing pixel filling on the interference target region in the image to be recognized to obtain the processed image to be recognized comprises:
performing pixel filling on the interference target region based on pixels of a non-interference region in the image to be recognized to obtain the processed image to be recognized, wherein the non-interference region is the image region of the image to be recognized other than the interference target region; or
performing pixel filling on the interference target region based on preset pixel information to obtain the processed image to be recognized.
3. The target object detection method according to claim 2, wherein the step of performing pixel filling on the interference target region based on pixels of the non-interference region in the image to be recognized to obtain the processed image to be recognized comprises:
acquiring a reference value of the pixel values of the non-interference region in the image to be recognized, wherein the reference value is determined based on the mode or average of the pixel values of the non-interference region; and
filling the interference target region with the reference value to obtain the processed image to be recognized.
4. The target object detection method according to claim 1, wherein the step of identifying the region where the interference target is located in the image to be recognized to obtain the interference target region comprises:
identifying the interference target in the image to be recognized using a target segmentation model, and segmenting the region where the interference target is located to obtain the interference target region, wherein the target segmentation model is trained with a preset training sample set comprising a plurality of training samples, each training sample containing a sample target of the same type as the interference target.
5. The target object detection method according to claim 1, wherein the target object is a foreign object appearing in the image to be recognized, and the step of confirming whether the salient target is a target object and generating a target object detection result comprises:
acquiring the area of the region where the salient target is located;
tracking the salient target to obtain target trajectory information of the salient target; and
confirming whether the salient target is the foreign object based on the area and the target trajectory information.
6. The target object detection method according to claim 5, wherein the step of confirming whether the salient target is the foreign object based on the area and the target trajectory information comprises:
determining whether the area is greater than a preset area threshold;
determining whether the target trajectory information satisfies a preset trajectory condition; and
in response to the area being greater than the preset area threshold and the target trajectory information satisfying the preset trajectory condition, confirming that the salient target is the foreign object.
7. The target object detection method according to claim 6, wherein the step of determining whether the target trajectory information satisfies the preset trajectory condition comprises:
determining a target state of the salient target based on the target trajectory information; and
in response to the target state being a motion state, determining that the target trajectory information satisfies the preset trajectory condition.
8. The target object detection method according to claim 1, wherein the step of confirming whether the salient target is a target object and generating a target object detection result further comprises:
acquiring a preset region of interest;
judging whether the salient target is within the region of interest; and
in response to the salient target being within the region of interest, confirming that the salient target is the target object.
9. The target object detection method according to claim 1, wherein the step of performing salient target detection on the processed image to be recognized to obtain the salient target comprises:
performing salient target detection on the processed image to be recognized using a salient target detection model to obtain the salient target;
and the method further comprises:
in response to confirming that the salient target is the target object, training the salient target detection model with the image of the region where the target object is located and the image to be recognized.
10. A target object detection apparatus, comprising a memory and a processor connected to each other, wherein the memory is configured to store a computer program which, when executed by the processor, implements the target object detection method according to any one of claims 1 to 9.
11. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the target object detection method according to any one of claims 1 to 9.
CN202211049159.3A 2022-08-30 2022-08-30 Target detection method and device and computer readable storage medium Pending CN115661735A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211049159.3A CN115661735A (en) 2022-08-30 2022-08-30 Target detection method and device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN115661735A true CN115661735A (en) 2023-01-31

Family

ID=85024436

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211049159.3A Pending CN115661735A (en) 2022-08-30 2022-08-30 Target detection method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN115661735A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116740197A (en) * 2023-08-11 2023-09-12 之江实验室 External parameter calibration method and device, storage medium and electronic equipment
CN116740197B (en) * 2023-08-11 2023-11-21 之江实验室 External parameter calibration method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination