CN114219991A - Target detection method, device and computer readable storage medium

Info

Publication number
CN114219991A
CN114219991A
Authority
CN
China
Prior art keywords
target, image, detection, frame, processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111511730.4A
Other languages
Chinese (zh)
Inventor
金粲 (Jin Can)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Peitian Robotics Group Co Ltd
Original Assignee
Anhui Peitian Robotics Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
2021-12-06
Publication date
2022-03-22
Application filed by Anhui Peitian Robotics Group Co Ltd filed Critical Anhui Peitian Robotics Group Co Ltd
Priority to CN202111511730.4A
Publication of CN114219991A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/243 Classification techniques relating to the number of classes
    • G06F18/2431 Multiple classes
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods

Abstract

The present application discloses a target detection method, an apparatus, and a computer-readable storage medium. The target detection method includes: acquiring an image to be processed; performing target recognition on the image to be processed to obtain a bounding box of a detection target in the image to be processed and feature points on the detection target; and determining target features of the detection target according to the bounding box and the feature points. The target detection method provided by the application can obtain more target features of the detection target.

Description

Target detection method, device and computer readable storage medium
Technical Field
The present application relates to the field of target detection technologies, and in particular, to a target detection method, an apparatus, and a computer-readable storage medium.
Background
When an object is detected with a conventional object detection algorithm, as shown in FIG. 1, the position information finally output generally includes only the position information of the rectangular frame 11 framing the target 10, namely the center coordinates of the rectangular frame 11 and the width and height of the rectangular frame 11. This approach can only locate the target 10 and cannot provide further information about the features of the target 10, which is a limitation.
Disclosure of Invention
In view of the above, the present application provides a target detection method, an apparatus, and a computer-readable storage medium, which can obtain more target features of a detection target.
A first aspect of the embodiments of the present application provides a target detection method, the method including: acquiring an image to be processed; performing target recognition on the image to be processed to obtain a bounding box of a detection target in the image to be processed and feature points on the detection target; and determining target features of the detection target according to the bounding box and the feature points.
A second aspect of the embodiments of the present application provides a target detection apparatus, which includes a processor, a memory, and a communication circuit. The processor is coupled to the memory and the communication circuit respectively, the memory stores program data, and the processor implements the steps of the above method by executing the program data in the memory.
A third aspect of the embodiments of the present application provides a computer-readable storage medium, which stores a computer program executable by a processor to implement the steps of the above method.
The beneficial effect of the present application is that, when performing target recognition on the image to be processed, the target detection method not only obtains the bounding box of the detection target but also recognizes the feature points on the detection target, so that more target features can be determined by combining the two.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and a person skilled in the art can obtain other drawings from them without creative effort. Wherein:
FIG. 1 is a schematic diagram of a target and a rectangular frame in a prior-art application scenario;
FIG. 2 is a schematic diagram of a target and an oblique frame in another prior-art application scenario;
FIG. 3 is a schematic diagram of a target in an application scenario;
FIG. 4 is a schematic flowchart of an embodiment of the target detection method of the present application;
FIG. 5 is a schematic diagram of a detection target, a bounding box, and feature points in the present application;
FIG. 6 is a partial schematic flowchart of an embodiment of the target detection method of the present application;
FIG. 7 is a schematic flowchart of step S120 in FIG. 4;
FIG. 8 is a schematic diagram of an image to be processed in an application scenario of the present application;
FIG. 9 is a schematic structural diagram of an embodiment of an object detection device according to the present application;
FIG. 10 is a schematic structural diagram of an embodiment of a computer-readable storage medium according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments herein without creative effort shall fall within the protection scope of the present application.
In the prior art, for tasks that require grasping according to the direction of the target, a simple rectangular frame 11 (as shown in FIG. 1) cannot provide direction information. A scheme for detecting an oblique frame has therefore been proposed: as shown in FIG. 2, an angle θ is added to the output information of the rectangular frame 11, i.e. the target 10 is visually framed by an oblique frame 12, and the output information then includes the center coordinates of the oblique frame 12, the width and height of the oblique frame 12, and the inclination angle of the oblique frame 12.
Although the angle of the target 10 can be obtained by the above scheme, the oblique frame 12 requires one more angle parameter than the rectangular frame 11 in the training stage, which increases the burden of image annotation. Moreover, as shown in FIG. 3, some asymmetric small ornaments without obvious corners, such as ornaments with circular-arc outlines, are difficult to frame with a single oblique frame 12. In view of these defects in the prior art, the following scheme is provided:
referring to fig. 4, in an embodiment of the present application, a target detection method includes:
s110: and acquiring an image to be processed.
The image to be processed can be an image acquired by the target detection device through any way, and can be a color image or a gray image.
S120: performing target recognition on the image to be processed to obtain a bounding box of a detection target in the image to be processed and feature points on the detection target.
The detection target in the image to be processed may be any type of object, such as a person, an animal, or a car, and the number of detection targets in the image to be processed may be one, two, three, or more.
The bounding box is a circumscribed rectangle of the detection target, while the feature points lie on the detection target itself and are usually special points on it, such as corner points or points on its contour. For example, in the application scenario of FIG. 5, the bounding box is the circumscribed rectangle 22 of the detection target 21, and the feature points are several corner points 23 on the detection target 21.
S130: determining target features of the detection target according to the bounding box and the feature points.
It can be understood that, when there are a plurality of detection targets in the image to be processed, the target features of each detection target are determined according to the bounding box and the feature points of that same detection target.
Specifically, the detection target can be located through the bounding box, while information such as the angle and the facing surface of the detection target can be determined through the feature points. For example, a reference direction is determined on the image to be processed, and the angle between this reference direction and the straight line through two predetermined feature points on the detection target is taken as the angle of the detection target. As another example, if the detection target has a front surface bearing feature points and an opposite back surface without them, then when no feature points are found on the detection target in the image to be processed, it is determined that the back surface faces upward, and when feature points are found, it is determined that the front surface faces upward.
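For illustration only, the following minimal sketch (in Python) shows how step S130 might combine a bounding box and feature points into target features as just described; the function name, the dictionary layout, and the two-point angle convention are assumptions of this sketch, not part of the disclosure:

```python
import math

def target_features(bbox, keypoints, ref_deg=0.0):
    """Sketch of step S130: derive target features from a bounding box
    (cx, cy, w, h) and a possibly empty list of (x, y) feature points."""
    cx, cy, w, h = bbox
    features = {"position": (cx, cy), "size": (w, h)}
    # Per the example above: no feature points visible means the
    # (featureless) back surface of the target faces upward.
    features["surface"] = "front" if keypoints else "back"
    if len(keypoints) >= 2:
        # Angle between the straight line through two predetermined
        # feature points and the reference direction of the image.
        (x1, y1), (x2, y2) = keypoints[0], keypoints[1]
        angle = math.degrees(math.atan2(y2 - y1, x2 - x1)) - ref_deg
        features["angle"] = angle % 360.0
    return features
```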
In other words, compared with the prior art, the present application can recognize not only the bounding box of the detection target but also the feature points on it, and by combining the bounding box and the feature points, the detection target can both be located and characterized by more target features, such as its angle and facing surface.
In the present embodiment, in order to improve the speed and accuracy of image recognition, a pre-trained target detection network is used to perform target recognition on the image to be processed, so as to obtain the bounding box and the feature points of the detection target.
Specifically, the target detection network has been trained to convergence in advance and can recognize the bounding box of a detection target in a received image and the feature points on that target. Concretely, after the image to be processed is input into the target detection network, the network outputs the coordinates of the center point of the bounding box, the width and height of the bounding box, and the coordinates of each feature point.
In other embodiments, one target detection network may be used to perform target recognition on the image to be processed and obtain the bounding box that locates the detection target, after which another target detection network identifies the feature points on the detection target.
It can be understood that using the same target detection network for the whole recognition, as in this embodiment, not only increases the speed of image recognition compared with applying two target detection networks in sequence, but also directly associates the bounding box and the feature points of the same detection target.
Referring to FIG. 6, in the present embodiment the target detection network is trained in advance, and the training process includes:
S140: acquiring a sample image, the sample image including a sample target.
S150: marking, in the sample image, a sample bounding box of the sample target and sample feature points on the sample target.
S160: training the target detection network with the sample image as input and the sample bounding box and the sample feature points of the sample target as annotation information.
Specifically, during training, after a sample image is obtained, the ground-truth information of the sample target is annotated on the sample image, including the circumscribed rectangular frame of the sample target (i.e. the sample bounding box) and the feature points on the sample target (i.e. the sample feature points). After the sample image is input into the target detection network, the network is trained against this annotation information: the bounding box output by the network gradually approaches the sample bounding box, and the output feature points gradually approach the sample feature points, until the training requirement is met.
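As a hedged illustration of step S160, the following sketch expresses the training signal described above, pulling the predicted bounding box toward the sample bounding box and the predicted feature points toward the sample feature points; the choice of PyTorch, of L1 losses, and of equal weighting are assumptions of this sketch, not details given in the disclosure:

```python
import torch.nn.functional as F

def detection_loss(pred, target):
    """Sketch of the S160 objective: box regression plus feature point
    regression (loss type and weighting are assumed)."""
    box_loss = F.l1_loss(pred["box"], target["box"])              # (cx, cy, w, h)
    kpt_loss = F.l1_loss(pred["keypoints"], target["keypoints"])  # n pairs (x, y)
    return box_loss + kpt_loss
```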
After the target detection network is trained, a weight file is generated, which contains the parameters the network uses to process received images. When an image to be processed needs to be recognized by the target detection network, the weight file is loaded into the network, and the target detection network loaded with the weight file can then process the image to be processed.
In the prior art, only the bounding box of the detection target needs to be recognized, so during network training the annotation information includes only the sample bounding box, in the format "category x y w h", where category indicates the class of the sample target in the sample bounding box (for example person, car, cat, or dog), and x, y, w, and h are, in order, the abscissa and ordinate of the center point of the sample bounding box and the width and height of the sample bounding box.
In the present embodiment, during network training the annotation information includes both the sample bounding box and the sample feature points, in the format "category x y w h x1 y1 … xn yn", where category, x, y, w, and h have the same meanings as in the prior art, x1 and y1 are respectively the abscissa and ordinate of the sample feature point labeled 1, and xn and yn are respectively those of the sample feature point labeled n, n being equal to the number of sample feature points.
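For concreteness, a minimal parser for the annotation format "category x y w h x1 y1 … xn yn" might look as follows; whitespace separation and the returned dictionary layout are assumptions of this sketch:

```python
def parse_label(line):
    """Parse one annotation line: 'category x y w h x1 y1 ... xn yn'."""
    fields = line.split()
    category = fields[0]
    x, y, w, h = map(float, fields[1:5])
    coords = list(map(float, fields[5:]))
    assert len(coords) % 2 == 0, "feature points come in (x, y) pairs"
    keypoints = [(coords[i], coords[i + 1]) for i in range(0, len(coords), 2)]
    return {"category": category, "bbox": (x, y, w, h), "keypoints": keypoints}

# e.g. parse_label("ornament 120.0 96.0 64.0 48.0 100.0 80.0 140.0 112.0")
```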
The target detection network used in this embodiment may be an improvement on an existing YOLO target detection network, or an improvement on other target detection networks, for example SSD or Faster R-CNN, which is not limited here.
With reference to FIG. 7, in this embodiment the process by which the target detection network recognizes the image to be processed includes:
S121: after receiving the image to be processed, the target detection network generates a plurality of anchor boxes on the image to be processed.
S122: predicting, for each anchor box, its offset relative to the bounding box and its offset relative to the feature points, as well as its confidence relative to the bounding box and its confidence relative to the feature points.
S123: determining the bounding box and the feature points according to the position of each anchor box, its offsets relative to the bounding box and the feature points, and its confidences relative to the bounding box and the feature points.
Specifically, an anchor box is also called a prior box; its position and size are preset, and after the target detection network receives the image to be processed, the anchor boxes are generated on the image according to a preset rule.
After the plurality of anchor boxes are generated, the offset of each anchor box relative to the bounding box is predicted, and the confidence of each anchor box relative to the bounding box is obtained at the same time. The confidence of an anchor box relative to the bounding box represents the probability that a corresponding bounding box exists in the anchor box, together with the accuracy of the predicted offset of the anchor box relative to that bounding box.
The offset of an anchor box relative to the bounding box can also be understood as its offset relative to the detection target; it essentially means the offset of the center point of the anchor box relative to the center point of the bounding box, i.e. the center point of the detection target. Likewise, the confidence of an anchor box relative to the bounding box can be understood as its confidence relative to the detection target.
Similarly, for each anchor box, the offset of the anchor box relative to the feature points is predicted, and the confidence of the anchor box relative to the feature points is obtained. The confidence of an anchor box relative to a feature point represents the probability that the corresponding feature point exists in the anchor box, together with the accuracy of the predicted offset of the anchor box relative to that feature point. The offset of an anchor box relative to a feature point essentially means the offset of the center point of the anchor box relative to the feature point.
Finally, the bounding box is determined according to the position of each anchor box, the offset of each anchor box relative to the bounding box, and the confidence of each anchor box relative to the bounding box; the feature points are determined according to the position of each anchor box, the offset of each anchor box relative to the feature points, and the confidence of each anchor box relative to the feature points.
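A minimal sketch of the decoding in step S123 is given below; the additive offset parameterization and the prediction dictionary layout are assumptions of this sketch (real networks typically use scaled or normalized encodings):

```python
def decode_anchor(anchor_cx, anchor_cy, pred):
    """Sketch of step S123 for one anchor box: turn predicted offsets and
    confidences into a candidate bounding box and candidate feature points."""
    # Candidate bounding box: anchor center shifted by the predicted
    # center offset, with the predicted width and height.
    box = (anchor_cx + pred["dx"], anchor_cy + pred["dy"], pred["w"], pred["h"])
    # Candidate feature points: anchor center shifted by per-point offsets.
    points = [(anchor_cx + dx, anchor_cy + dy) for dx, dy in pred["kpt_offsets"]]
    return box, pred["box_conf"], points, pred["kpt_confs"]
```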
The specific process of obtaining the bounding box and the feature points is described below.
In this embodiment, the step S121 of generating a plurality of anchor boxes on the image to be processed includes: dividing the image to be processed into a plurality of image blocks; and generating the plurality of anchor boxes on the image to be processed centered on the center point of each of the image blocks.
Specifically, with reference to FIG. 8, after the image to be processed is divided into a plurality of image blocks (also called grid cells), the center point 101 of each image block is a grid point, and an anchor box is then established with the grid point of each image block as its center point.
As can be seen from the above analysis, the offset of an anchor box relative to the bounding box essentially means the offset of the center point of the anchor box relative to the bounding box, and the center point of the anchor box is the center point of an image block.
Therefore, the larger the number of image blocks and the denser their center points, the more offsets of image-block center points relative to the bounding box and the feature points can be predicted, which reduces the prediction error. In other words, the more image blocks there are, the lower the errors of the predicted offsets of the image-block center points relative to the bounding box and the feature points, and the lower these errors, the more accurate the final localization of the bounding box and the feature points.
That is to say, the localization of the bounding box and the feature points is tied to the image blocks, so the localization precision can be adjusted by adjusting the number of image blocks (i.e. adjusting their size), thereby meeting the precision requirements of different application scenarios. In other embodiments, the anchor boxes may be generated around a predetermined point in each image block, and this predetermined point need not be the center point of the image block.
Meanwhile, in the present embodiment, when the anchor boxes are generated, at least two anchor boxes of different scales are generated centered on the center point 101 of each image block; that is, anchor boxes of at least two different scales are established at the center point 101 of each image block.
Here, different scales means different anchor-box areas and/or different anchor-box aspect ratios. For example, in FIG. 8, an anchor box 102 and an anchor box 103 are generated for the center point 101.
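The following sketch illustrates step S121 as refined here: the image is divided into grid cells, and anchor boxes of at least two scales are generated at each cell center. The grid size and the scale values are illustrative assumptions:

```python
def generate_anchors(img_w, img_h, grid=13, scales=((32, 32), (64, 32))):
    """Sketch of S121: anchor boxes (cx, cy, w, h) of at least two scales
    centered on the center point of each image block (grid cell)."""
    anchors = []
    cell_w, cell_h = img_w / grid, img_h / grid
    for gy in range(grid):
        for gx in range(grid):
            cx, cy = (gx + 0.5) * cell_w, (gy + 0.5) * cell_h  # grid point 101
            for w, h in scales:  # differing area and/or aspect ratio
                anchors.append((cx, cy, w, h))
    return anchors
```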
Specifically, recognizing the bounding box and the feature points is in essence recognizing a local neighborhood image: if the neighborhood image is too small, it cannot fully contain the bounding box and the feature points, so they cannot be recognized; if it is too large, it also covers features other than the bounding box and the feature points, which slows down recognition. Generating anchor boxes of at least two different scales therefore makes it more likely that some anchor box fits the detection target well.
When a plurality of anchor boxes are generated centered on the center point 101 of each of the image blocks, the process of determining the bounding box includes: determining at least two target detection frames of different scales corresponding to the same detection target according to the position of each anchor box, the offset of each anchor box relative to the bounding box, and the confidence of each anchor box relative to the bounding box; and then determining, among these target detection frames of different scales, the one with the highest confidence as the bounding box of the detection target.
Specifically, when only one anchor box of a single scale is generated centered on the center point 101 of each image block, bounding-box prediction is performed for each anchor box to obtain the offset and the confidence of each anchor box relative to the bounding box. Then, from the position of each anchor box and its offset relative to the bounding box, a plurality of target detection frames of the same size and aspect ratio corresponding to the same detection target are obtained, together with their confidences, where the confidence of a target detection frame is the confidence of the corresponding anchor box relative to the corresponding detection target.
After the plurality of target detection frames corresponding to the same detection target are obtained, they are processed with a non-maximum suppression (NMS) algorithm, so that finally only one target detection frame is kept for each detection target.
When anchor boxes of at least two different scales are generated centered on the center point 101 of each image block, the above steps are performed separately for the anchor boxes of each scale, so that at least two target detection frames of different scales are obtained for the same detection target; among these target detection frames of different scales, the one with the highest confidence is then kept as the bounding box of the detection target.
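A minimal sketch of the non-maximum suppression step described above follows; the IoU threshold and the (cx, cy, w, h) box format are assumptions of this sketch:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (cx, cy, w, h)."""
    ax1, ay1, ax2, ay2 = a[0] - a[2] / 2, a[1] - a[3] / 2, a[0] + a[2] / 2, a[1] + a[3] / 2
    bx1, by1, bx2, by2 = b[0] - b[2] / 2, b[1] - b[3] / 2, b[0] + b[2] / 2, b[1] + b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def nms(detections, iou_thresh=0.5):
    """Keep only the highest-confidence detection frame per target.
    detections: list of ((cx, cy, w, h), confidence) tuples."""
    detections = sorted(detections, key=lambda d: d[1], reverse=True)
    kept = []
    for box, conf in detections:
        if all(iou(box, k[0]) < iou_thresh for k in kept):
            kept.append((box, conf))
    return kept
```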
Similarly, the process of determining the feature points includes: determining at least two feature point detection frames of different scales corresponding to the same feature point, and then determining the center point of the feature point detection frame with the highest confidence among them as the feature point.
This process is similar to the determination of the bounding box; the main difference is that, after the at least two feature point detection frames of different scales corresponding to the same feature point are obtained, the one with the highest confidence is found and its center point is determined as the feature point.
Referring to FIG. 9, FIG. 9 is a schematic structural diagram of an embodiment of the target detection apparatus of the present application. The target detection apparatus 200 includes a processor 210, a memory 220, and a communication circuit 230, wherein the processor 210 is coupled to the memory 220 and the communication circuit 230 respectively, the memory 220 stores program data, and the processor 210 implements the steps of the method in any of the above embodiments by executing the program data in the memory 220; for the detailed steps, reference can be made to the above embodiments, which are not repeated here.
The target detection apparatus 200 may be any device with image-processing capability, such as a computer or a mobile phone, which is not limited here.
Referring to FIG. 10, FIG. 10 is a schematic structural diagram of an embodiment of a computer-readable storage medium according to the present application. The computer-readable storage medium 300 stores a computer program 310, and the computer program 310 can be executed by a processor to implement the steps of any of the methods described above.
The computer-readable storage medium 300 may be a device capable of storing the computer program 310, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk, or it may be a server that stores the computer program 310 and can either send the stored program to another device for execution or run it itself.
The above description presents only embodiments of the present application and is not intended to limit its scope of protection; any equivalent structural or process transformation made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, falls likewise within the protection scope of the present application.

Claims (10)

1. A target detection method, the method comprising:
acquiring an image to be processed;
performing target recognition on the image to be processed to obtain a bounding box of a detection target in the image to be processed and feature points on the detection target;
and determining target features of the detection target according to the bounding box and the feature points.
2. The method according to claim 1, wherein the step of performing target recognition on the image to be processed to obtain a bounding box of a detection target in the image to be processed and feature points on the detection target comprises:
performing target recognition on the image to be processed by using a pre-trained target detection network to obtain the bounding box and the feature points of the detection target.
3. The method of claim 2, further comprising, prior to said acquiring the image to be processed:
obtaining a sample image, the sample image including a sample target;
marking, in the sample image, a sample bounding box of the sample target and sample feature points on the sample target;
and training the target detection network with the sample image as input and the sample bounding box and the sample feature points of the sample target as annotation information.
4. The method according to claim 2, wherein the step of performing target recognition on the image to be processed by using a pre-trained target detection network to obtain the bounding box and the feature points of the detection target comprises:
after receiving the image to be processed, generating, by the target detection network, a plurality of anchor boxes on the image to be processed;
predicting, for each anchor box, its offset relative to the bounding box and its offset relative to the feature points, as well as its confidence relative to the bounding box and its confidence relative to the feature points;
and determining the bounding box and the feature points according to the position of each anchor box, its offsets relative to the bounding box and the feature points, and its confidences relative to the bounding box and the feature points.
5. The method of claim 4, wherein the step of generating a plurality of anchor boxes on the image to be processed comprises:
dividing the image to be processed into a plurality of image blocks;
and generating the plurality of anchor boxes on the image to be processed centered on the center point of each of the plurality of image blocks.
6. The method according to claim 5, wherein the step of generating the plurality of anchor boxes on the image to be processed centered on the center point of each of the plurality of image blocks comprises:
generating, centered on the center point of each image block, anchor boxes of at least two different scales on the image to be processed.
7. The method of claim 6, wherein the step of determining the bounding box and the feature points according to the position of each anchor box, its offsets relative to the bounding box and the feature points, and its confidences relative to the bounding box and the feature points comprises:
determining at least two target detection frames of different scales corresponding to the same detection target according to the position of each anchor box, the offset of each anchor box relative to the bounding box, and the confidence of each anchor box relative to the bounding box;
determining at least two feature point detection frames of different scales corresponding to the same feature point according to the position of each anchor box, the offset of each anchor box relative to the feature point, and the confidence of each anchor box relative to the feature point;
determining, among the at least two target detection frames of different scales corresponding to the same detection target, the target detection frame with the highest confidence as the bounding box of the detection target;
and determining, among the at least two feature point detection frames of different scales corresponding to the same feature point, the center point of the feature point detection frame with the highest confidence as the feature point.
8. The method according to claim 1, wherein the step of determining the target features of the detection target according to the bounding box and the feature points comprises:
determining the position feature of the detection target according to the bounding box;
and determining the angle feature of the detection target according to the feature points.
9. A target detection apparatus, comprising a processor, a memory, and a communication circuit, wherein the processor is coupled to the memory and the communication circuit respectively, the memory stores program data, and the processor executes the program data in the memory to implement the steps of the method according to any one of claims 1-8.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which is executable by a processor to implement the steps in the method according to any one of claims 1-8.
Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111511730.4A 2021-12-06 2021-12-06 Target detection method, device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN114219991A 2022-03-22

Family

ID=80701007


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination