CN115760974A - Detection method and device for truncated target


Info

Publication number
CN115760974A
Authority
CN
China
Prior art keywords: truncated, target, feature, region, truncation
Prior art date
Legal status
Pending
Application number
CN202211370980.5A
Other languages
Chinese (zh)
Inventor
陆强 (Lu Qiang)
Current Assignee
International Network Technology Shanghai Co Ltd
Original Assignee
International Network Technology Shanghai Co Ltd
Priority date
Filing date
Publication date
Application filed by International Network Technology Shanghai Co Ltd
Priority to CN202211370980.5A
Publication of CN115760974A
Legal status: Pending


Abstract

The invention provides a detection method and device for a truncated target, comprising the following steps: acquiring images of a plurality of view angles of the truncated target; extracting 2D features of the truncated target based on the images; and determining 3D coordinates of the 2D features in a 3D coordinate system, transforming the 2D features into 3D features according to the 3D coordinates, and obtaining a prediction result of the truncated target according to the 3D features. The invention can improve the detection effect on truncated targets.

Description

Detection method and device for truncated target
Technical Field
The invention relates to the technical field of computers, in particular to a method and a device for detecting a truncated target.
Background
When 3D target detection relies on a camera (e.g., monocular 3D detection), target truncation is often encountered: for example, a large truck near the camera may appear only partially in the image. Existing methods reduce the sample-ratio imbalance between truncated and non-truncated targets, for instance by increasing the number of truncated-target samples, and thereby improve the performance of the detection model. Although such methods can improve the detection of truncated targets to some extent, the achievable effect is bounded by the ceiling of the detection model itself.
Disclosure of Invention
The invention provides a method and a device for detecting a truncated target, which are used to solve the above problem.
The invention provides a method for detecting a truncated target, comprising the following steps: acquiring images of a plurality of view angles of the truncated target; extracting 2D features of the truncated target based on the images; and determining 3D coordinates of the 2D features in a 3D coordinate system, transforming the 2D features into 3D features according to the 3D coordinates, and obtaining a prediction result of the truncated target according to the 3D features.
Further, the determining 3D coordinates of the 2D feature in a 3D coordinate system includes: inputting the 2D features into a depth prediction network to obtain corresponding depth dimension information; determining 2D coordinates of each pixel in the 2D feature in an image coordinate system; and determining a 3D coordinate of the 2D feature in a 3D coordinate system according to the 2D coordinate, a preset conversion matrix and the corresponding depth dimension information.
Further, the prediction result of the truncated target comprises a center point of a truncated region of the truncated target, a deviation after the center-point coordinates of an un-truncated region of the truncated target are rounded, a deviation from the center point of a complete region of the truncated target to a target center point of the complete region of the truncated target, a three-dimensional size of the truncated target, and a truncation attribute of the truncated target; wherein the complete region of the truncated target comprises the truncated region and the un-truncated region of the target; the truncation attribute of the truncated target represents the ratio of the area of the truncated region of the truncated target to the area of the complete region of the truncated target.
Further, the method further comprises: determining a final center point of the truncated target according to the center point of the truncated region of the truncated target, the deviation after the center-point coordinates of the un-truncated region of the truncated target are rounded, and the deviation from the center point of the complete region of the truncated target to the target center point of the complete region of the truncated target.
Further, obtaining a prediction result of the truncated target according to the 3D feature includes:
A model for predicting the prediction result of the truncated target is trained according to at least one loss function, and the prediction result of the truncated target is obtained through prediction with the model.
Further, the at least one loss function includes at least one of a loss function for handling sample classification imbalance and a regression loss function.
The present invention also provides a device for detecting a truncated target, comprising: an image acquisition module for acquiring images of a plurality of view angles of the truncated target; a 2D feature extraction module for extracting 2D features of the truncated target based on the images; and a truncated-target prediction module for determining 3D coordinates of the 2D features in a 3D coordinate system, transforming the 2D features into 3D features according to the 3D coordinates, and obtaining a prediction result of the truncated target according to the 3D features.
The invention also provides an electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the method for detecting a truncated target described above.
The present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method for detecting a truncated target as described in any of the above.
The method and device for detecting a truncated target according to the invention improve the detection of truncated targets from two angles: input and model design. Compared with monocular 3D detection, multi-view image input can capture the features of a truncated target from multiple angles and thus acquire more target information. Compared with detecting 3D targets from the 2D image view, where the target box must be converted to the BEV view through complex post-processing before it can be used downstream (for planning and control), the invention detects targets directly in the BEV (bird's eye view), avoiding this conversion. In addition, outputs designed specifically for truncated targets are added, further improving the detection of truncated targets.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of a method for detecting a truncated object according to the present invention;
FIG. 2 is a schematic diagram of one embodiment of a method for detecting a truncated object provided by the present invention;
FIG. 3 is a schematic diagram of one embodiment, provided by the present invention, of transforming two-dimensional coordinates into three-dimensional coordinates;
FIG. 4 is a schematic structural diagram of a device for detecting a truncated object according to the present invention;
fig. 5 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
Referring to fig. 1, fig. 1 is a flowchart illustrating a method for detecting a truncated object according to some embodiments of the present invention. As shown in fig. 1, the method comprises the steps of:
step 101, acquiring images of a plurality of viewing angles for a truncation target.
In this step, 2D images are acquired from multiple view angles of the truncated target; the multi-view images can reflect more information about the truncated target. After the multiple 2D images are obtained, they are stitched to form a complete image corresponding to the truncated target, as illustrated by the sketch below.
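The following is a minimal Python sketch of the stitching step. The patent does not specify the stitching scheme; horizontal concatenation of equally sized camera frames is assumed here purely for illustration.

```python
import numpy as np

def stitch_views(images):
    """Stitch multi-view 2D images into one composite image.

    Horizontal concatenation of equally sized frames is an assumption
    made for illustration; the patent only states that the per-view
    images are spliced into a complete image.
    """
    if len(images) < 2:
        raise ValueError("at least two views are expected")
    if len({img.shape[0] for img in images}) != 1:
        raise ValueError("all views must share the same height for this scheme")
    return np.hstack(images)

# Example: three 720x1280 RGB camera frames -> one 720x3840 composite.
views = [np.zeros((720, 1280, 3), dtype=np.uint8) for _ in range(3)]
composite = stitch_views(views)
```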
In the specific image-acquisition process, the truncated target can be photographed by cameras at a plurality of preset view angles, and well-chosen view angles can improve the detection accuracy for the truncated target.
In some embodiments, after the images of the multiple view angles are acquired, they may be post-processed, for example by estimating the blur degree of each image, keeping the less blurred images for subsequent 2D feature extraction, and discarding the more blurred ones. A sketch of one such blur check follows.
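The patent does not say how the blur degree is measured; the variance of the Laplacian response is one common heuristic and is used in the sketch below (the threshold value is likewise an illustrative assumption).

```python
import cv2

def laplacian_sharpness(image_bgr):
    """Variance of the Laplacian response: higher means sharper."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def filter_blurry(images, threshold=100.0):
    """Keep only views whose sharpness exceeds a tunable threshold."""
    return [img for img in images if laplacian_sharpness(img) > threshold]
```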
Step 102, extracting 2D features of the truncated object based on the image.
In this step, the complete image is input into an existing neural network, and 2D feature extraction is performed through the neural network.
In some embodiments, after the 2D features are extracted, feature optimization may be performed to obtain stable 2D features that better reflect the distinguishing characteristics of the truncated target; these 2D features are then used for the subsequent 3D feature transformation.
And 103, determining a 3D coordinate of the 2D feature in a 3D coordinate system, converting the 2D feature into a 3D feature according to the 3D coordinate, and obtaining a prediction result of the truncation target according to the 3D feature.
In some optional implementations, the determining 3D coordinates of the 2D feature in a 3D coordinate system includes: inputting the 2D features into a depth prediction network to obtain corresponding depth dimension information; determining 2D coordinates of each pixel in the 2D feature in an image coordinate system; and determining the 3D coordinate of the 2D feature in a 3D coordinate system according to the 2D coordinate, a preset transformation matrix and the corresponding depth dimension information.
As an example, the 2D feature map is input into a depth prediction network to obtain the depth dimension information d corresponding to each of its pixels, and the 2D coordinates of each pixel of the 2D features in the image coordinate system are determined as (x, y). The 3D coordinates of the 2D features in the 3D coordinate system are then determined from the 2D coordinates, a preset conversion matrix, and the corresponding depth dimension information, i.e., the 3D coordinates are obtained as (x, y, d)·W, where W denotes the conversion matrix. In this embodiment, the depth prediction network is composed of several convolution blocks; in other embodiments of the invention, it may be any common neural network that predicts depth dimension information from 2D features.
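A minimal sketch of this lifting step in Python/NumPy. The exact form of the preset conversion matrix W is not given in the patent, so a generic 3x3 matrix is assumed here.

```python
import numpy as np

def lift_to_3d(xy, depth, W):
    """Lift per-pixel 2D image coordinates to 3D via predicted depth.

    xy    : (N, 2) pixel coordinates (x, y) in the image coordinate system
    depth : (N,)  depth value d predicted for each pixel
    W     : (3, 3) preset conversion matrix (its exact form is not stated
            in the patent; a generic 3x3 matrix is assumed)
    Returns (N, 3) coordinates in the 3D frame, computed as (x, y, d) @ W.
    """
    xyd = np.concatenate([xy, depth[:, None]], axis=1)  # (N, 3)
    return xyd @ W
```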
In some optional implementations, the prediction result of the truncated target includes a center point of a truncated region of the truncated target, a deviation after the center-point coordinates of an un-truncated region of the truncated target are rounded, a deviation from the center point of a complete region of the truncated target to a target center point of the complete region of the truncated target, a three-dimensional size of the truncated target, and a truncation attribute of the truncated target; wherein the complete region of the truncated target comprises the truncated region and the un-truncated region of the target, and the truncation attribute of the truncated target represents the ratio of the area of the truncated region of the truncated target to the area of the complete region of the truncated target.
In some optional implementations, the obtaining a prediction result of the truncated target according to the 3D features includes: training a model for predicting the prediction result of the truncated target according to at least one loss function, and obtaining the prediction result of the truncated target through the model.
Further, the at least one loss function includes at least one of a loss function for handling sample classification imbalance (e.g., GHM-C Loss, Focal Loss) and a regression loss function (e.g., L1 Loss, L2 Loss, or Smooth L1 Loss). A suitable loss function may also be selected according to the specific application scenario.
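For concreteness, a sketch of the standard binary Focal Loss (Lin et al., 2017), one of the classification losses named above, is given below; the alpha and gamma values are the common defaults, not values stated in the patent.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary Focal Loss: down-weights easy examples so the many
    well-classified background locations do not drown out the rare
    truncated-target positives."""
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)              # prob of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)  # class weighting
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()
```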
The method for detecting a truncated target disclosed in some embodiments of the invention improves the detection of truncated targets from two aspects: input and model design. Compared with monocular 3D detection, multi-view image input can capture the features of a truncated target from multiple angles and thus acquire more target information. Compared with detecting 3D targets from the 2D image view, where the target box must be converted to the BEV view through complex post-processing before downstream use (planning and control), the method detects targets directly in the BEV (bird's eye view), avoiding this conversion. In addition, outputs designed specifically for truncated targets are added, improving the detection of truncated targets.
In some optional implementations, the method further comprises: determining the final center point of the truncated target according to the center point of the truncated region of the truncated target, the deviation after the center-point coordinates of the un-truncated region of the truncated target are rounded, and the deviation from the center point of the complete region of the truncated target to the target center point of the complete region of the truncated target.
Specifically, let the center-point coordinates of the truncated region of the truncated target be denoted c0, the offset after the center-point coordinates of the un-truncated region are rounded be denoted f0, and the offset from the center point of the complete region of the truncated target to the target center point of the complete region be denoted f1; the final center-point coordinates are then c = c0 + f0 + f1.
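A tiny worked sketch of this decoding step; the numeric values are made up purely for illustration.

```python
import numpy as np

c0 = np.array([42.0, 17.0])   # center of the truncated region (heatmap peak)
f0 = np.array([0.3, -0.2])    # sub-pixel rounding offset
f1 = np.array([-1.5, 2.1])    # offset to the center of the complete region
c = c0 + f0 + f1              # final center point: [40.8, 18.9]
```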
Referring to fig. 2:
1. Multi-view images are prepared.
Images shot by cameras (number >= 2) mounted at different angles on the vehicle are fed into the network together.
2. Feature extraction is performed on the multi-view images through a backbone network (backbone) to obtain 2D features. The backbone network is an existing neural network composed of several convolution blocks and is used to extract 2D features from the 2D images.
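A minimal backbone sketch follows. The patent only says the backbone is an existing convolutional network; a truncated ResNet-50 from torchvision is used here purely as a stand-in, and the input resolution is an arbitrary example.

```python
import torch
import torchvision

# Keep everything up to the last convolutional stage; drop pooling and fc.
resnet = torchvision.models.resnet50(weights=None)
backbone = torch.nn.Sequential(*list(resnet.children())[:-2])

views = torch.randn(6, 3, 256, 704)   # e.g., 6 camera views, batched
feat_2d = backbone(views)             # (6, 2048, 8, 22) 2D feature maps
```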
3. The 2D features are subjected to view transformation (view transform), i.e., the 2D features are transformed from 2D coordinates to 3D coordinates, thereby obtaining 3D features.
Specifically, the features in the img-view coordinate system (the image coordinate system, i.e., 2D coordinates) are converted into features in the 3D coordinate system; the conversion method is shown in fig. 3:
The img-view features (the 2D features in the image coordinate system) are input into a depth prediction network, DepthNet (composed of several convolution blocks). DepthNet predicts a depth bin for each pixel of the image features (i.e., the 2D features), over a depth range of 1-60 at an interval of 1, yielding the depth dimension information corresponding to the 2D features. Once the depth dimension information is available, the coordinate values of the image features in 3D coordinates can be determined, and the features are then converted from the image coordinate system according to the preset conversion matrix. In this embodiment the preset conversion matrix converts into the 3D coordinate system of the vehicle body; in other embodiments of the invention it may be adjusted to the actual application scenario. A sketch of DepthNet follows.
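A minimal DepthNet sketch consistent with the description above: a few convolution blocks that classify each feature pixel into one of 60 depth bins. The channel widths are assumptions, not values from the patent.

```python
import torch.nn as nn

class DepthNet(nn.Module):
    """Predict a distribution over 60 depth bins (range 1-60, interval 1)
    for every pixel of the 2D feature map."""

    def __init__(self, in_channels=256, num_bins=60):
        super().__init__()
        self.blocks = nn.Sequential(
            nn.Conv2d(in_channels, 256, 3, padding=1), nn.BatchNorm2d(256), nn.ReLU(),
            nn.Conv2d(256, 256, 3, padding=1), nn.BatchNorm2d(256), nn.ReLU(),
            nn.Conv2d(256, num_bins, 1),
        )

    def forward(self, feat_2d):
        # (B, C, H, W) -> (B, 60, H, W): per-pixel depth-bin distribution
        return self.blocks(feat_2d).softmax(dim=1)
```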
4. And the Detection head (Detection head layer) outputs the result according to the 3D characteristics.
Specifically, the Detection head layer outputs: the center point of the truncated region of the truncated target (denoted hm), the deviation after the center-point coordinates of the un-truncated region of the truncated target are rounded (denoted reg), the deviation from the center point of the complete region of the truncated target to the target center point of the complete region (denoted reg_virtual), the three-dimensional size of the truncated target (denoted dim), and the truncation attribute of the truncated target (denoted truncate, with a value between 0 and 1); see the sketch below. During training, the loss functions are, in order: Focal Loss for the center point hm, Smooth L1 Loss for the regression outputs reg, reg_virtual, and dim, and Focal Loss for the truncation attribute truncate. For training truncate, the label is designed as the ratio of the area of the truncated region to the area of the complete region of the target (between 0 and 1).
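A sketch of such a detection head, with one small convolution branch per output named above. The branch channel counts (2 for 2D offsets, 3 for 3D size) follow common conventions and are assumptions where the patent does not state them.

```python
import torch.nn as nn

class TruncationHead(nn.Module):
    """One conv branch per output of the Detection head layer."""

    def __init__(self, in_channels=256):
        super().__init__()

        def branch(out_ch):
            return nn.Sequential(
                nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, out_ch, 1),
            )

        self.hm = branch(1)            # center point of the truncated region
        self.reg = branch(2)           # rounding deviation f0
        self.reg_virtual = branch(2)   # deviation f1 to the complete-region center
        self.dim = branch(3)           # 3D size (l, w, h)
        self.truncate = branch(1)      # truncation ratio in [0, 1]

    def forward(self, feat_3d):
        return {
            "hm": self.hm(feat_3d).sigmoid(),
            "reg": self.reg(feat_3d),
            "reg_virtual": self.reg_virtual(feat_3d),
            "dim": self.dim(feat_3d),
            "truncate": self.truncate(feat_3d).sigmoid(),
        }
```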
In one application scenario, the truncated target can be an automobile, and the acquired image from one view angle of the truncated target can show the head of the automobile; the center point of the truncated region of the truncated target can then be the center point of the head region. The deviation after the center-point coordinates of the un-truncated region of the truncated target are rounded can represent the difference between the rounded and un-rounded coordinate values of the tail region of the vehicle; the deviation from the center point of the complete region of the truncated target to the target center point of the complete region can be expressed as the deviation from the center point of the complete region of the vehicle (i.e., the head and tail regions) to the target center point; the three-dimensional size of the truncated target can be expressed as the three-dimensional size of the vehicle; and the truncation attribute of the truncated target can be expressed as the truncation attribute of the vehicle.
Referring to fig. 4, fig. 4 is a schematic structural diagram of some embodiments of the device for detecting a truncated object according to the present invention, and as an implementation of the methods shown in the above figures, the present invention further provides some embodiments of a device for detecting a truncated object, which correspond to the embodiments of the methods shown in fig. 1, and which can be applied to various electronic devices.
As shown in fig. 4, the detection apparatus of a truncated object of some embodiments includes: an image acquisition module 401, configured to acquire images of multiple viewing angles for a truncated object; a 2D feature extraction module 402, configured to extract a 2D feature of the truncation target based on the image; a truncated object prediction module 403, configured to determine a 3D coordinate of the 2D feature in a 3D coordinate system, transform the 2D feature into a 3D feature according to the 3D coordinate, and obtain a prediction result of the truncated object according to the 3D feature.
In an optional implementation of some embodiments, the truncated target prediction module 403 is further configured to: inputting the 2D features into a depth prediction network to obtain corresponding depth dimension information; determining 2D coordinates of each pixel in the 2D features in an image coordinate system; and determining the 3D coordinate of the 2D feature in a 3D coordinate system according to the 2D coordinate, a preset transformation matrix and the corresponding depth dimension information.
In an optional implementation of some embodiments, the prediction result of the truncated target includes a center point of a truncated region of the truncated target, a deviation after the center-point coordinates of an un-truncated region of the truncated target are rounded, a deviation from the center point of a complete region of the truncated target to a target center point of the complete region of the truncated target, a three-dimensional size of the truncated target, and a truncation attribute of the truncated target; wherein the complete region of the truncated target comprises the truncated region and the un-truncated region of the target, and the truncation attribute of the truncated target represents the ratio of the area of the truncated region of the truncated target to the area of the complete region of the truncated target.
In an optional implementation of some embodiments, the apparatus further includes: a center-point calculation module for determining the final center point of the truncated target according to the center point of the truncated region of the truncated target, the deviation after the center-point coordinates of the un-truncated region of the truncated target are rounded, and the deviation from the center point of the complete region of the truncated target to the target center point of the complete region of the truncated target.
In an optional implementation of some embodiments, the truncated target prediction module 403 is further configured to: and training according to at least one loss function to obtain a model for predicting the prediction result of the truncated target, and predicting through the model to obtain the prediction result of the truncated target.
In an optional implementation of some embodiments, the at least one loss function comprises at least one of a regression loss function, a loss function for handling sample classification imbalances.
It will be appreciated that the modules described in the apparatus correspond to the steps of the method described with reference to fig. 1. The operations, features, and advantages described above for the method therefore also apply to the apparatus and the modules and units it includes, and are not repeated here.
Fig. 5 illustrates a physical structure diagram of an electronic device, which may include, as shown in fig. 5: a processor (processor) 510, a communication Interface (Communications Interface) 520, a memory (memory) 530 and a communication bus 540, wherein the processor 510, the communication Interface 520 and the memory 530 communicate with each other via the communication bus 540. Processor 510 may call logic instructions in memory 530 to perform a method of detection of a truncated target, the method comprising: acquiring images of a plurality of perspectives for a truncated target; extracting 2D features of the truncated target based on the image; and determining a 3D coordinate of the 2D feature in a 3D coordinate system, transforming the 2D feature into a 3D feature according to the 3D coordinate, and obtaining a prediction result of the truncation target according to the 3D feature.
In addition, the logic instructions in the memory 530 may be implemented in the form of software functional units and, when sold or used as a stand-alone product, stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In still another aspect, the present invention also provides a non-transitory computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the method for detecting a truncated target provided by the methods above, the method comprising: acquiring images of a plurality of view angles of the truncated target; extracting 2D features of the truncated target based on the images; and determining 3D coordinates of the 2D features in a 3D coordinate system, transforming the 2D features into 3D features according to the 3D coordinates, and obtaining a prediction result of the truncated target according to the 3D features.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment may be implemented by software plus a necessary general hardware platform, or by hardware. Based on this understanding, the technical solutions above, in essence or in the part contributing to the prior art, may be embodied in the form of a software product, which may be stored in a computer-readable storage medium such as ROM/RAM, a magnetic disk, or an optical disk, and which includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the various embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (9)

1. A method for detecting a truncated target, comprising:
acquiring images of a plurality of view angles of the truncated target;
extracting 2D features of the truncated target based on the images;
and determining 3D coordinates of the 2D features in a 3D coordinate system, transforming the 2D features into 3D features according to the 3D coordinates, and obtaining a prediction result of the truncated target according to the 3D features.
2. The method for detecting a truncated target according to claim 1, wherein the determining 3D coordinates of the 2D features in a 3D coordinate system comprises:
inputting the 2D features into a depth prediction network to obtain corresponding depth dimension information;
determining 2D coordinates of each pixel in the 2D features in an image coordinate system;
and determining the 3D coordinates of the 2D features in the 3D coordinate system according to the 2D coordinates, a preset conversion matrix, and the corresponding depth dimension information.
3. The method for detecting a truncated target according to claim 1, wherein the prediction result of the truncated target comprises a center point of a truncated region of the truncated target, a deviation after the center-point coordinates of an un-truncated region of the truncated target are rounded, a deviation from the center point of a complete region of the truncated target to a target center point of the complete region of the truncated target, a three-dimensional size of the truncated target, and a truncation attribute of the truncated target;
wherein the complete region of the truncated target comprises the truncated region and the un-truncated region of the target;
and the truncation attribute of the truncated target represents a ratio of an area of the truncated region of the truncated target to an area of the complete region of the truncated target.
4. The method for detecting a truncated target according to claim 3, further comprising:
determining a final center point of the truncated target according to the center point of the truncated region of the truncated target, the deviation after the center-point coordinates of the un-truncated region of the truncated target are rounded, and the deviation from the center point of the complete region of the truncated target to the target center point of the complete region of the truncated target.
5. The method for detecting a truncated target according to claim 1, wherein the obtaining a prediction result of the truncated target according to the 3D features comprises:
training a model for predicting the prediction result of the truncated target according to at least one loss function, and obtaining the prediction result of the truncated target through the model.
6. The method for detecting a truncated target according to claim 5, wherein the at least one loss function comprises at least one of a loss function for handling sample classification imbalance and a regression loss function.
7. A device for detecting a truncated target, comprising:
an image acquisition module for acquiring images of a plurality of view angles of the truncated target;
a 2D feature extraction module for extracting 2D features of the truncated target based on the images;
and a truncated-target prediction module for determining 3D coordinates of the 2D features in a 3D coordinate system, transforming the 2D features into 3D features according to the 3D coordinates, and obtaining a prediction result of the truncated target according to the 3D features.
8. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the method for detecting a truncated target according to any one of claims 1 to 6.
9. A non-transitory computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method for detecting a truncated target according to any one of claims 1 to 6.
CN202211370980.5A (filed 2022-11-03) — CN115760974A: Detection method and device for truncated target — status: Pending

Priority Applications (1)

Application Number: CN202211370980.5A — Priority Date: 2022-11-03 — Filing Date: 2022-11-03 — Title: Detection method and device for truncated target (CN115760974A)

Publications (1)

Publication Number: CN115760974A — Publication Date: 2023-03-07

Family

ID=85357749

Country Status (1)

Country Link
CN (1) CN115760974A (en)

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination