CN112837384A - Vehicle marking method and device and electronic equipment - Google Patents


Info

Publication number
CN112837384A
CN112837384A
Authority
CN
China
Prior art keywords
vehicle
point cloud
cloud data
coordinate system
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110227154.4A
Other languages
Chinese (zh)
Other versions
CN112837384B (en)
Inventor
张广晟
田欢
胡骏
于红绯
刘威
袁淮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neusoft Reach Automotive Technology Shenyang Co Ltd
Original Assignee
Neusoft Reach Automotive Technology Shenyang Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neusoft Reach Automotive Technology Shenyang Co Ltd filed Critical Neusoft Reach Automotive Technology Shenyang Co Ltd
Priority to CN202110227154.4A priority Critical patent/CN112837384B/en
Publication of CN112837384A publication Critical patent/CN112837384A/en
Application granted granted Critical
Publication of CN112837384B publication Critical patent/CN112837384B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10044Radar image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a vehicle marking method, a vehicle marking device, and an electronic device. The vehicle marking method includes: acquiring vehicle point cloud data obtained by a laser radar detecting vehicles in a target environment and a vehicle image obtained by a camera shooting the vehicles in the target environment; determining a point cloud map corresponding to the pixels of the vehicle image based on the vehicle point cloud data, and determining a vehicle instance segmentation result based on the vehicle image; determining the point cloud data of each vehicle in the vehicle image in the camera coordinate system based on the point cloud map and the vehicle instance segmentation result, and performing coordinate conversion on the point cloud data of each vehicle in the camera coordinate system to obtain the point cloud data of each vehicle in the world coordinate system; and determining a marking truth value for each vehicle in the vehicle image based on the point cloud data of each vehicle in the world coordinate system. The method automatically marks the truth value of each vehicle, which improves marking efficiency, achieves a high degree of automation, reduces manual workload, and lowers marking cost.

Description

Vehicle marking method and device and electronic equipment
Technical Field
The present invention relates to the technical field of image recognition, and in particular to a vehicle marking method, a vehicle marking device, and an electronic device.
Background
In the field of automatic driving, deep learning is widely applied. In the deep learning process, a neural network needs to be trained, and a training sample data set is needed for training.
Currently, when a training sample data set is acquired, vehicle images are generally collected and then marked manually according to the specific training task (for example, with the position coordinates of a vehicle, the 2D frame of a vehicle, the feature points of a vehicle, and the like) to obtain marking truth values.
This vehicle marking method has the technical problems of low efficiency, a poor degree of automation, and high cost.
Disclosure of Invention
In view of the above, the present invention provides a vehicle marking method, a vehicle marking device, and an electronic device, so as to solve the technical problems of low efficiency, poor automation, and high cost in existing vehicle marking methods.
In a first aspect, an embodiment of the present invention provides a vehicle marking method, including:
acquiring vehicle point cloud data obtained by detecting a vehicle in a target environment by a laser radar and a vehicle image obtained by shooting the vehicle in the target environment by a camera;
determining a point cloud map corresponding to pixels of the vehicle image based on the vehicle point cloud data, and determining a vehicle instance segmentation result based on the vehicle image;
determining point cloud data of each vehicle in the vehicle image in a camera coordinate system based on the point cloud map and the vehicle instance segmentation result, and performing coordinate conversion on the point cloud data of each vehicle in the camera coordinate system to obtain the point cloud data of each vehicle in a world coordinate system;
and determining a marking truth value of each vehicle in the vehicle image based on the point cloud data of each vehicle in the world coordinate system.
Further, determining a point cloud map corresponding to pixels of the vehicle image based on the vehicle point cloud data, and determining a vehicle instance segmentation result based on the vehicle image, including:
performing coordinate conversion on the vehicle point cloud data to obtain a point cloud map corresponding to the pixels of the vehicle image;
and performing instance segmentation on the vehicle image to obtain a vehicle instance segmentation result.
Further, performing coordinate conversion on the vehicle point cloud data includes:
converting the vehicle point cloud data into a camera coordinate system to obtain a point cloud map corresponding to the pixels of the vehicle image.
Further, performing instance segmentation on the vehicle image includes:
performing instance segmentation on the vehicle image by using a vehicle instance segmentation model to obtain a vehicle instance segmentation result.
Further, determining point cloud data of each vehicle in the vehicle image under a camera coordinate system based on the point cloud map and the vehicle instance segmentation result comprises:
comparing the point cloud map with the vehicle instance segmentation result to obtain the point cloud data of each vehicle in the vehicle image in the camera coordinate system.
Further, the marking truth value includes at least one of the following: a coordinate truth value of a 3D model of the vehicle in the world coordinate system, a coordinate truth value of a 3D frame surrounding the vehicle in the world coordinate system, and a physical attribute truth value of the vehicle; and determining the marking truth value of each vehicle in the vehicle image based on the point cloud data of each vehicle in the world coordinate system includes:
clustering the point cloud data of each vehicle in the world coordinate system;
and obtaining the marking truth value of each vehicle in the vehicle image according to the clustering result.
Further, the physical attribute truth value includes at least one of the following: the length of the vehicle, the width of the vehicle, and the height of the vehicle.
In a second aspect, an embodiment of the present invention further provides a vehicle marking device, including:
an acquisition unit, configured to acquire vehicle point cloud data obtained by a laser radar detecting a vehicle in a target environment and a vehicle image obtained by a camera shooting the vehicle in the target environment;
a first determining unit, configured to determine a point cloud map corresponding to the pixels of the vehicle image based on the vehicle point cloud data, and determine a vehicle instance segmentation result based on the vehicle image;
a second determining unit, configured to determine point cloud data of each vehicle in the vehicle image in a camera coordinate system based on the point cloud map and the vehicle instance segmentation result, and perform coordinate conversion on the point cloud data of each vehicle in the camera coordinate system to obtain the point cloud data of each vehicle in a world coordinate system;
and a third determining unit, configured to determine a marking truth value of each vehicle in the vehicle image based on the point cloud data of each vehicle in the world coordinate system.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method according to any one of the above first aspects when executing the computer program.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing machine-executable instructions which, when invoked and executed by a processor, cause the processor to perform the method of any one of the first aspects.
In an embodiment of the present invention, a vehicle marking method is provided, including: acquiring vehicle point cloud data obtained by a laser radar detecting vehicles in a target environment and a vehicle image obtained by a camera shooting the vehicles in the target environment; determining a point cloud map corresponding to the pixels of the vehicle image based on the vehicle point cloud data, and determining a vehicle instance segmentation result based on the vehicle image; determining the point cloud data of each vehicle in the vehicle image in the camera coordinate system based on the point cloud map and the vehicle instance segmentation result, and performing coordinate conversion on the point cloud data of each vehicle in the camera coordinate system to obtain the point cloud data of each vehicle in the world coordinate system; and determining a marking truth value of each vehicle in the vehicle image based on the point cloud data of each vehicle in the world coordinate system. The vehicle marking method automatically marks the truth value of each vehicle, which improves marking efficiency, achieves a high degree of automation, reduces manual workload, and lowers marking cost, thereby solving the technical problems of low efficiency, poor automation, and high cost in existing vehicle marking methods.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of a vehicle marking method provided by an embodiment of the present invention;
FIG. 2 is a flowchart of a method for determining a point cloud map corresponding to the pixels of a vehicle image based on vehicle point cloud data and determining a vehicle instance segmentation result based on the vehicle image, according to an embodiment of the present invention;
FIG. 3 is a flowchart of a method for determining a marking truth value of each vehicle in a vehicle image based on the point cloud data of each vehicle in the world coordinate system, according to an embodiment of the present invention;
FIG. 4 is a schematic view of a vehicle marking apparatus according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the following embodiments, and it should be understood that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Currently, when a training sample data set is acquired in the field of automatic driving, vehicle images are usually marked manually to obtain marking truth values. This marking method has low efficiency, a poor degree of automation, and high cost.
Based on this, this embodiment provides a vehicle marking method that can automatically mark the truth value of each vehicle, improve marking efficiency, achieve a high degree of automation, reduce manual workload, and lower marking cost.
Embodiments of the present invention are further described below with reference to the accompanying drawings.
Embodiment one:
In accordance with an embodiment of the present invention, an embodiment of a vehicle marking method is provided. It should be noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system, such as one executing a set of computer-executable instructions, and that, although a logical order is shown in the flowchart, in some cases the steps may be performed in an order different from that described herein.
FIG. 1 is a flow chart of a vehicle marking method according to an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps:
step S102, vehicle point cloud data obtained by detecting a vehicle in a target environment by a laser radar and a vehicle image obtained by shooting the vehicle in the target environment by a camera are obtained;
in the embodiment of the invention, when the vehicle point cloud data and the vehicle image are acquired, the vehicle point cloud data and the vehicle image can be acquired by installing a front-view camera on a front windshield of the data acquisition vehicle and installing a laser radar on a front bumper of the data acquisition vehicle/the top of the data acquisition vehicle. The vehicle point cloud data is actually point cloud data of the vehicle under a laser radar coordinate system.
Generally, the detection range of the camera is smaller than that of the laser radar; that is, all of the information contained in the vehicle image shot by the camera is also reflected in the vehicle point cloud data detected by the laser radar. Alternatively, the vehicle data in the intersection of the fields of view of the laser radar and the camera can be used; specifically, the vehicle point cloud data detected by the laser radar are converted into the camera coordinate system, from which the vehicle data in the intersection of the two fields of view can be obtained.
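As an illustrative sketch (not part of the original disclosure), the conversion of lidar points into the camera coordinate system and the filtering to the field-of-view intersection could look like the following. It assumes Python with NumPy, a pinhole camera with intrinsic matrix K, and lidar-to-camera extrinsics (R, t) obtained from a prior calibration; all names are placeholders:

```python
import numpy as np

def lidar_to_camera(points_lidar, R, t):
    """Transform Nx3 lidar-frame points into the camera coordinate system.

    R (3x3) and t (3,) are the lidar-to-camera extrinsics, obtained from a
    prior lidar-camera calibration.
    """
    return points_lidar @ R.T + t

def in_camera_fov(points_cam, K, width, height):
    """Boolean mask of camera-frame points whose pinhole projection falls
    inside the image, i.e. points in the lidar/camera field-of-view
    intersection."""
    z = points_cam[:, 2]
    z_safe = np.where(z > 1e-6, z, 1e-6)   # avoid dividing by zero
    uvw = points_cam @ K.T                 # rows: [fx*x + cx*z, fy*y + cy*z, z]
    u, v = uvw[:, 0] / z_safe, uvw[:, 1] / z_safe
    return (z > 0) & (u >= 0) & (u < width) & (v >= 0) & (v < height)
```

Points behind the camera or projecting outside the image bounds are discarded, which realizes the field-of-view intersection described above.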
Step S104, determining a point cloud image corresponding to the pixels of the vehicle image based on the vehicle point cloud data, and determining a vehicle instance segmentation result based on the vehicle image;
step S106, determining point cloud data of each vehicle in the vehicle image under a camera coordinate system based on the point cloud image and the vehicle instance segmentation result, and performing coordinate conversion on the point cloud data of each vehicle under the camera coordinate system to obtain the point cloud data of each vehicle under a world coordinate system;
and step S108, determining a mark truth value of each vehicle in the vehicle image based on the point cloud data of each vehicle in the world coordinate system.
The process from step S104 to step S108 is described in detail below, and is not described herein again.
In an embodiment of the present invention, a vehicle marking method is provided, including: acquiring vehicle point cloud data obtained by a laser radar detecting vehicles in a target environment and a vehicle image obtained by a camera shooting the vehicles in the target environment; determining a point cloud map corresponding to the pixels of the vehicle image based on the vehicle point cloud data, and determining a vehicle instance segmentation result based on the vehicle image; determining the point cloud data of each vehicle in the vehicle image in the camera coordinate system based on the point cloud map and the vehicle instance segmentation result, and performing coordinate conversion on the point cloud data of each vehicle in the camera coordinate system to obtain the point cloud data of each vehicle in the world coordinate system; and determining the marking truth value of each vehicle in the vehicle image based on the point cloud data of each vehicle in the world coordinate system. The vehicle marking method automatically marks the truth value of each vehicle, which improves marking efficiency, achieves a high degree of automation, reduces manual workload, and lowers marking cost, thereby solving the technical problems of low efficiency, poor automation, and high cost in existing vehicle marking methods.
The vehicle marking method of the present invention is briefly described above, and the details thereof are described in detail below.
In an alternative embodiment of the present invention, referring to FIG. 2, the step S104 of determining a point cloud map corresponding to the pixels of the vehicle image based on the vehicle point cloud data and determining a vehicle instance segmentation result based on the vehicle image includes the following steps:
Step S201, performing coordinate conversion on the vehicle point cloud data to obtain a point cloud map corresponding to the pixels of the vehicle image;
Specifically, the vehicle point cloud data are converted into the camera coordinate system to obtain a point cloud map corresponding to the pixels of the vehicle image.
Step S202, performing instance segmentation on the vehicle image to obtain a vehicle instance segmentation result.
Specifically, a vehicle instance segmentation model is used to perform instance segmentation on the vehicle image to obtain the vehicle instance segmentation result.
In an optional embodiment of the present invention, the step S106 of determining the point cloud data of each vehicle in the vehicle image in the camera coordinate system based on the point cloud map and the vehicle instance segmentation result specifically includes: comparing the point cloud map with the vehicle instance segmentation result to obtain the point cloud data of each vehicle in the vehicle image in the camera coordinate system.
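The comparison could be sketched as follows (illustrative; assumes NumPy, the per-pixel point-index map produced when building the point cloud map, and one boolean mask per vehicle, a common output format of instance segmentation models):

```python
import numpy as np

def points_per_vehicle(points_cam, index_map, instance_masks):
    """Compare the point cloud map with the instance segmentation result:
    for each vehicle mask, collect the camera-frame points that project
    onto that vehicle's pixels."""
    per_vehicle = []
    for mask in instance_masks:   # one HxW boolean mask per segmented vehicle
        idx = index_map[mask]     # point indices under this vehicle's pixels
        idx = idx[idx >= 0]       # drop pixels with no projected point
        per_vehicle.append(points_cam[idx])
    return per_vehicle
```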
In an alternative embodiment of the present invention, the marking truth value includes at least one of the following: a coordinate truth value of the 3D model of the vehicle in the world coordinate system, a coordinate truth value of the 3D frame surrounding the vehicle in the world coordinate system, and a physical attribute truth value of the vehicle. Referring to FIG. 3, the step S108 of determining the marking truth value of each vehicle in the vehicle image based on the point cloud data of each vehicle in the world coordinate system includes the following steps:
step S301, clustering point cloud data of each vehicle in a world coordinate system;
specifically, the physical property true value includes at least one of: the length of the vehicle, the width of the vehicle, and the height of the vehicle.
Step S302, obtaining the marking truth value of each vehicle in the vehicle image according to the clustering result.
Specifically, the coordinate truth value of the 3D model of the vehicle in the world coordinate system can be obtained from the clustering result; in addition, by taking the boundary values of the clustering result, the coordinate truth value of the 3D frame surrounding the vehicle in the world coordinate system and the physical attribute truth value of the vehicle can be obtained.
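These steps could be sketched as follows (illustrative only; assumes NumPy, a known camera-to-world pose, and a naive Euclidean clustering standing in for whatever clustering algorithm an implementation would actually use):

```python
import numpy as np

def camera_to_world(points_cam, R_wc, t_wc):
    """Map camera-frame points into the world frame via the camera pose
    (camera-to-world rotation R_wc and translation t_wc)."""
    return points_cam @ R_wc.T + t_wc

def largest_euclidean_cluster(points, radius=0.7):
    """Naive O(N^2) single-linkage Euclidean clustering; returns the largest
    cluster, discarding stray points that do not belong to the vehicle."""
    n = len(points)
    adj = np.linalg.norm(points[:, None] - points[None, :], axis=2) <= radius
    labels = np.full(n, -1)
    current = 0
    for seed in range(n):
        if labels[seed] >= 0:
            continue
        stack = [seed]
        labels[seed] = current
        while stack:                           # flood-fill one connected component
            i = stack.pop()
            for j in np.flatnonzero(adj[i] & (labels < 0)):
                labels[j] = current
                stack.append(j)
        current += 1
    return points[labels == np.bincount(labels).argmax()]

def box_truths(cluster):
    """Take the boundary values of the cluster: axis-aligned 3D frame corners
    plus the physical attribute truths (length, width, height)."""
    lo, hi = cluster.min(axis=0), cluster.max(axis=0)
    return (lo, hi), tuple(hi - lo)
```

The axis-aligned box is the simplest reading of "taking the boundary values"; an oriented box fit would be a natural refinement.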
The vehicle marking method of the present invention may process the vehicle point cloud data detected by the laser radar and the vehicle image captured by the camera in real time, or may post-process them, so as to obtain the marking truth value of each vehicle in the vehicle image.
The vehicle marking method can automatically mark truth values in large batches, which improves marking efficiency, achieves a high degree of automation, reduces manual workload, and lowers marking cost.
Embodiment two:
the embodiment of the invention also provides a vehicle marking device, which is mainly used for executing the vehicle marking method provided by the embodiment of the invention, and the vehicle marking device provided by the embodiment of the invention is specifically described below.
FIG. 4 is a schematic view of a vehicle marking device according to an embodiment of the present invention. As shown in FIG. 4, the device mainly includes an acquisition unit 10, a first determining unit 20, a second determining unit 30, and a third determining unit 40, wherein:
the acquisition unit is configured to acquire vehicle point cloud data obtained by a laser radar detecting a vehicle in a target environment and a vehicle image obtained by a camera shooting the vehicle in the target environment;
the first determining unit is configured to determine a point cloud map corresponding to the pixels of the vehicle image based on the vehicle point cloud data, and determine a vehicle instance segmentation result based on the vehicle image;
the second determining unit is configured to determine the point cloud data of each vehicle in the vehicle image in the camera coordinate system based on the point cloud map and the vehicle instance segmentation result, and perform coordinate conversion on the point cloud data of each vehicle in the camera coordinate system to obtain the point cloud data of each vehicle in the world coordinate system;
and the third determining unit is configured to determine the marking truth value of each vehicle in the vehicle image based on the point cloud data of each vehicle in the world coordinate system.
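The cooperation of the four units could be sketched as the following hypothetical skeleton (not the claimed implementation; the injected callables stand in for the processing each unit performs):

```python
class VehicleMarkingDevice:
    """Minimal sketch of the four-unit device; the injected callables are
    hypothetical stand-ins for the processing described above."""

    def __init__(self, acquisition, first_det, second_det, third_det):
        self.acquisition = acquisition  # -> (vehicle point cloud, vehicle image)
        self.first_det = first_det      # -> (point cloud map, instance masks)
        self.second_det = second_det    # -> per-vehicle world-frame points
        self.third_det = third_det      # -> marking truth values

    def mark(self):
        cloud, image = self.acquisition()
        pc_map, masks = self.first_det(cloud, image)
        world_points = self.second_det(pc_map, masks)
        return self.third_det(world_points)
```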
In an embodiment of the present invention, a vehicle marking device is provided, which: acquires vehicle point cloud data obtained by a laser radar detecting vehicles in a target environment and a vehicle image obtained by a camera shooting the vehicles in the target environment; determines a point cloud map corresponding to the pixels of the vehicle image based on the vehicle point cloud data, and determines a vehicle instance segmentation result based on the vehicle image; determines the point cloud data of each vehicle in the vehicle image in the camera coordinate system based on the point cloud map and the vehicle instance segmentation result, and performs coordinate conversion on the point cloud data of each vehicle in the camera coordinate system to obtain the point cloud data of each vehicle in the world coordinate system; and determines the marking truth value of each vehicle in the vehicle image based on the point cloud data of each vehicle in the world coordinate system. The vehicle marking device automatically marks the truth value of each vehicle, which improves marking efficiency, achieves a high degree of automation, reduces manual workload, and lowers marking cost, thereby solving the technical problems of low efficiency, poor automation, and high cost in existing vehicle marking methods.
Optionally, the first determining unit is further configured to: perform coordinate conversion on the vehicle point cloud data to obtain a point cloud map corresponding to the pixels of the vehicle image; and perform instance segmentation on the vehicle image to obtain a vehicle instance segmentation result.
Optionally, the first determining unit is further configured to convert the vehicle point cloud data into the camera coordinate system to obtain a point cloud map corresponding to the pixels of the vehicle image.
Optionally, the first determining unit is further configured to perform instance segmentation on the vehicle image by using a vehicle instance segmentation model to obtain a vehicle instance segmentation result.
Optionally, the second determining unit is further configured to compare the point cloud map with the vehicle instance segmentation result to obtain the point cloud data of each vehicle in the vehicle image in the camera coordinate system.
Optionally, the marking truth value includes at least one of the following: a coordinate truth value of the 3D model of the vehicle in the world coordinate system, a coordinate truth value of the 3D frame surrounding the vehicle in the world coordinate system, and a physical attribute truth value of the vehicle; and the third determining unit is further configured to: cluster the point cloud data of each vehicle in the world coordinate system; and obtain the marking truth value of each vehicle in the vehicle image according to the clustering result.
Optionally, the physical attribute truth value includes at least one of the following: the length of the vehicle, the width of the vehicle, and the height of the vehicle.
The device provided by the embodiment of the present invention has the same implementation principle and technical effects as the foregoing method embodiments; for brevity, where the device embodiment is silent, reference may be made to the corresponding content in the method embodiments.
As shown in FIG. 5, an electronic device 600 provided by an embodiment of the present application includes a processor 601, a memory 602, and a bus. The memory 602 stores machine-readable instructions executable by the processor 601; when the electronic device runs, the processor 601 and the memory 602 communicate through the bus, and the processor 601 executes the machine-readable instructions to perform the steps of the vehicle marking method.
Specifically, the memory 602 and the processor 601 can be general-purpose memory and processor, which are not limited in particular, and the vehicle marking method can be performed when the processor 601 runs a computer program stored in the memory 602.
The processor 601 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits in hardware or by instructions in the form of software in the processor 601. The processor 601 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed accordingly. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in connection with the embodiments of the present application may be executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or a register. The storage medium is located in the memory 602; the processor 601 reads the information in the memory 602 and completes the steps of the above method in combination with its hardware.
Corresponding to the vehicle marking method, the embodiment of the application also provides a computer readable storage medium, wherein the computer readable storage medium stores machine executable instructions, and when the computer executable instructions are called and executed by a processor, the computer executable instructions cause the processor to execute the steps of the vehicle marking method.
The vehicle marking device provided by the embodiment of the present application may be specific hardware on a device, or software or firmware installed on a device. The device provided by the embodiment of the present application has the same implementation principle and technical effects as the foregoing method embodiments; for brevity, where the device embodiment is silent, reference may be made to the corresponding content in the foregoing method embodiments. It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, devices, and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatuses, methods, and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code that comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending on the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments provided in the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, or the portion thereof that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing an electronic device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the vehicle marking method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that like reference numbers and letters refer to like items in the figures; once an item is defined in one figure, it need not be further defined or explained in subsequent figures. Moreover, the terms "first", "second", "third", and the like are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above-mentioned embodiments are merely specific embodiments of the present application, used to illustrate rather than limit its technical solutions, and the scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art may still modify the technical solutions described in the foregoing embodiments, or easily conceive of changes to them, or make equivalent substitutions for some of their technical features, within the technical scope disclosed in the present application. Such modifications, changes, or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the embodiments of the present application, and are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A vehicle marking method, comprising:
acquiring vehicle point cloud data obtained by detecting a vehicle in a target environment by a laser radar and a vehicle image obtained by shooting the vehicle in the target environment by a camera;
determining a point cloud map corresponding to the pixels of the vehicle image based on the vehicle point cloud data, and determining a vehicle instance segmentation result based on the vehicle image;
determining point cloud data of each vehicle in the vehicle image under a camera coordinate system based on the point cloud map and the vehicle instance segmentation result, and performing coordinate conversion on the point cloud data of each vehicle under the camera coordinate system to obtain the point cloud data of each vehicle under a world coordinate system; and
determining a marking truth value of each vehicle in the vehicle image based on the point cloud data of each vehicle under the world coordinate system.
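The camera-to-world conversion recited above amounts to applying a rigid homogeneous transform to each of a vehicle's camera-frame points. A minimal sketch follows; the name `T_world_cam` and the 4x4 matrix form are illustrative assumptions, since the patent does not specify how the camera pose relative to the world frame is obtained:

```python
# Sketch of the camera-to-world coordinate conversion step of claim 1.
# T_world_cam (a 4x4 homogeneous rigid transform) is an illustrative
# placeholder; the patent does not state how it is calibrated.

def camera_to_world(T_world_cam, points_cam):
    """Convert a vehicle's camera-frame 3D points to the world frame."""
    converted = []
    for x, y, z in points_cam:
        converted.append(tuple(
            T_world_cam[i][0] * x + T_world_cam[i][1] * y +
            T_world_cam[i][2] * z + T_world_cam[i][3]
            for i in range(3)))
    return converted
```

For example, a translation-only transform shifts every point by the translation column, while a rotation block reorients the points before the shift.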
2. The method of claim 1, wherein determining a point cloud map corresponding to pixels of the vehicle image based on the vehicle point cloud data and determining a vehicle instance segmentation result based on the vehicle image comprises:
performing coordinate conversion on the vehicle point cloud data to obtain the point cloud map corresponding to the pixels of the vehicle image; and
performing instance segmentation on the vehicle image to obtain the vehicle instance segmentation result.
3. The method of claim 2, wherein coordinate transforming the vehicle point cloud data comprises:
converting the vehicle point cloud data into the camera coordinate system to obtain the point cloud map corresponding to the pixels of the vehicle image.
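The conversion in claim 3 can be sketched as two steps: mapping each lidar point into the camera frame with an extrinsic transform, then projecting it onto the image plane. The extrinsic matrix `T_cam_lidar` and the pinhole intrinsics `fx, fy, cx, cy` are illustrative names, not taken from the patent, which leaves the calibration format unspecified:

```python
# Minimal sketch of lidar-to-pixel projection (claim 3).
# T_cam_lidar and the pinhole intrinsics are illustrative assumptions.

def lidar_to_camera(T_cam_lidar, point):
    """Apply a 4x4 homogeneous extrinsic transform to one lidar point."""
    x, y, z = point
    return tuple(T_cam_lidar[i][0] * x + T_cam_lidar[i][1] * y +
                 T_cam_lidar[i][2] * z + T_cam_lidar[i][3] for i in range(3))

def project_to_pixel(point_cam, fx, fy, cx, cy):
    """Pinhole projection of a camera-frame point; None if behind the camera."""
    x, y, z = point_cam
    if z <= 0:
        return None
    return (fx * x / z + cx, fy * y / z + cy)
```

Points with non-positive depth are dropped, so the resulting point cloud map only covers pixels the lidar can actually see through the camera.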
4. The method of claim 2, wherein performing instance segmentation on the vehicle image comprises:
performing instance segmentation on the vehicle image by using a vehicle instance segmentation model to obtain the vehicle instance segmentation result.
5. The method of claim 1, wherein determining point cloud data for each vehicle in the vehicle image under a camera coordinate system based on the point cloud map and the vehicle instance segmentation results comprises:
comparing the point cloud map with the vehicle instance segmentation result to obtain the point cloud data of each vehicle in the vehicle image under the camera coordinate system.
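The comparison in claim 5 can be read as a per-pixel lookup: each projected 3D point is assigned to the vehicle whose instance mask it falls in. The mask encoding below (0 for background, positive integers for vehicle instances) is an assumption for illustration only:

```python
# Sketch of claim 5: associate projected lidar points with vehicle
# instances via the per-pixel segmentation mask. The mask encoding
# (0 = background, positive ids = vehicles) is an assumption.

def assign_points_to_vehicles(projected, mask):
    """projected: list of (u, v, point3d) with integer pixel coordinates.
    mask: 2D list where mask[v][u] is a vehicle instance id, 0 otherwise.
    Returns a dict mapping instance id -> that vehicle's 3D points."""
    per_vehicle = {}
    height, width = len(mask), len(mask[0])
    for u, v, point in projected:
        if 0 <= u < width and 0 <= v < height and mask[v][u] != 0:
            per_vehicle.setdefault(mask[v][u], []).append(point)
    return per_vehicle
```

Points projecting outside the image or onto background pixels are simply discarded, leaving one camera-frame point set per segmented vehicle.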
6. The method of claim 1, wherein the marking truth value comprises at least one of: a coordinate truth value of a 3D model of the vehicle in the world coordinate system, a coordinate truth value of a 3D frame surrounding the vehicle in the world coordinate system, and a physical attribute truth value of the vehicle; and wherein determining the marking truth value of each vehicle in the vehicle image based on the point cloud data of each vehicle in the world coordinate system comprises:
clustering the point cloud data of each vehicle under the world coordinate system; and
obtaining the marking truth value of each vehicle in the vehicle image according to the clustering result.
7. The method of claim 6, wherein the physical attribute truth value comprises at least one of: a length of the vehicle, a width of the vehicle, and a height of the vehicle.
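Claims 6 and 7 can be sketched as clustering a vehicle's world-frame points to discard strays, then reading dimension truth values off the surviving cluster. The greedy distance-threshold clustering and the axis-aligned box below are simplifying assumptions; the patent fixes neither the clustering algorithm nor the box orientation:

```python
# Sketch of claims 6-7: cluster world-frame points, then derive box
# truth values. Both the clustering scheme and the axis-aligned box
# are illustrative simplifications.

def largest_cluster(points, eps=0.5):
    """Keep the largest cluster of points linked by distances <= eps."""
    remaining = list(points)
    clusters = []
    while remaining:
        cluster = [remaining.pop()]
        i = 0
        while i < len(cluster):
            near = [p for p in remaining
                    if sum((a - b) ** 2 for a, b in zip(p, cluster[i])) <= eps * eps]
            for p in near:
                remaining.remove(p)
            cluster.extend(near)
            i += 1
        clusters.append(cluster)
    return max(clusters, key=len)

def box_truth(points):
    """Axis-aligned box center and (length, width, height) of a point set."""
    xs, ys, zs = zip(*points)
    center = ((max(xs) + min(xs)) / 2, (max(ys) + min(ys)) / 2,
              (max(zs) + min(zs)) / 2)
    return center, (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))
```

A density-based method such as DBSCAN would be a common production choice for the clustering step; the greedy variant here keeps the sketch dependency-free.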
8. A vehicle marking device, comprising:
an acquisition unit configured to acquire vehicle point cloud data obtained by a laser radar detecting a vehicle in a target environment and a vehicle image obtained by a camera shooting the vehicle in the target environment;
a first determining unit configured to determine a point cloud map corresponding to the pixels of the vehicle image based on the vehicle point cloud data, and to determine a vehicle instance segmentation result based on the vehicle image;
a second determining unit configured to determine point cloud data of each vehicle in the vehicle image under a camera coordinate system based on the point cloud map and the vehicle instance segmentation result, and to perform coordinate conversion on the point cloud data of each vehicle under the camera coordinate system to obtain the point cloud data of each vehicle under a world coordinate system; and
a third determining unit configured to determine a marking truth value of each vehicle in the vehicle image based on the point cloud data of each vehicle under the world coordinate system.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 7.
10. A computer-readable storage medium having stored thereon machine-executable instructions which, when invoked and executed by a processor, cause the processor to perform the method of any one of claims 1 to 7.
CN202110227154.4A 2021-03-01 2021-03-01 Vehicle marking method and device and electronic equipment Active CN112837384B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110227154.4A CN112837384B (en) 2021-03-01 2021-03-01 Vehicle marking method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN112837384A true CN112837384A (en) 2021-05-25
CN112837384B CN112837384B (en) 2024-07-19

Family

ID=75934303

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110227154.4A Active CN112837384B (en) 2021-03-01 2021-03-01 Vehicle marking method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112837384B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113281770A (en) * 2021-05-28 2021-08-20 东软睿驰汽车技术(沈阳)有限公司 Coordinate system relation obtaining method and device
CN113780214A (en) * 2021-09-16 2021-12-10 上海西井信息科技有限公司 Method, system, device and storage medium for image recognition based on crowd

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110988912A (en) * 2019-12-06 2020-04-10 中国科学院自动化研究所 Road target and distance detection method, system and device for automatic driving vehicle
CN111222417A (en) * 2019-12-24 2020-06-02 武汉中海庭数据技术有限公司 Method and device for improving lane line extraction precision based on vehicle-mounted image
CN111340797A (en) * 2020-03-10 2020-06-26 山东大学 Laser radar and binocular camera data fusion detection method and system
CN111950426A (en) * 2020-08-06 2020-11-17 东软睿驰汽车技术(沈阳)有限公司 Target detection method and device and delivery vehicle
CN112037159A (en) * 2020-07-29 2020-12-04 长安大学 Cross-camera road space fusion and vehicle target detection tracking method and system
CN112230204A (en) * 2020-10-27 2021-01-15 深兰人工智能(深圳)有限公司 Combined calibration method and device for laser radar and camera
CN112348902A (en) * 2020-12-03 2021-02-09 苏州挚途科技有限公司 Method, device and system for calibrating installation deviation angle of road end camera
CN112396650A (en) * 2020-03-30 2021-02-23 青岛慧拓智能机器有限公司 Target ranging system and method based on fusion of image and laser radar



Also Published As

Publication number Publication date
CN112837384B (en) 2024-07-19

Similar Documents

Publication Publication Date Title
EP3171292B1 (en) Driving lane data processing method, device, storage medium and apparatus
CN110472580B (en) Method, device and storage medium for detecting parking stall based on panoramic image
Küçükmanisa et al. Real-time illumination and shadow invariant lane detection on mobile platform
CN111382625A (en) Road sign identification method and device and electronic equipment
CN112837384A (en) Vehicle marking method and device and electronic equipment
CN111582032A (en) Pedestrian detection method and device, terminal equipment and storage medium
CN111950504A (en) Vehicle detection method and device and electronic equipment
CN114919584A (en) Motor vehicle fixed point target distance measuring method and device and computer readable storage medium
CN114758268A (en) Gesture recognition method and device and intelligent equipment
CN113569812A (en) Unknown obstacle identification method and device and electronic equipment
CN112767412B (en) Vehicle part classification method and device and electronic equipment
CN114119695A (en) Image annotation method and device and electronic equipment
CN109523570B (en) Motion parameter calculation method and device
CN113542868A (en) Video key frame selection method and device, electronic equipment and storage medium
CN116977671A (en) Target tracking method, device, equipment and storage medium based on image space positioning
CN116863458A (en) License plate recognition method, device, system and storage medium
CN116363628A (en) Mark detection method and device, nonvolatile storage medium and computer equipment
CN115713750A (en) Lane line detection method and device, electronic equipment and storage medium
CN115115530B (en) Image deblurring method, device, terminal equipment and medium
CN111950501B (en) Obstacle detection method and device and electronic equipment
CN116259021A (en) Lane line detection method, storage medium and electronic equipment
CN111950490B (en) Parking rod identification method and training method and device of identification model thereof
CN113642521A (en) Traffic light identification quality evaluation method and device and electronic equipment
CN113591543A (en) Traffic sign recognition method and device, electronic equipment and computer storage medium
CN113066100A (en) Target tracking method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant