CN115825979A - Environment sensing method and device, electronic equipment, storage medium and vehicle - Google Patents

Environment sensing method and device, electronic equipment, storage medium and vehicle

Info

Publication number
CN115825979A
Authority
CN
China
Prior art keywords
data
target object
generate
association matching
point cloud
Prior art date
Legal status
Pending
Application number
CN202211466683.0A
Other languages
Chinese (zh)
Inventor
艾锐
赵鹏云
陈康亮
苏梦璇
顾维灏
Current Assignee
Haomo Zhixing Technology Co Ltd
Original Assignee
Haomo Zhixing Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Haomo Zhixing Technology Co Ltd filed Critical Haomo Zhixing Technology Co Ltd
Priority to CN202211466683.0A
Publication of CN115825979A

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The invention relates to an environment perception method and device, an electronic device and a readable storage medium in the technical field of intelligent vehicle driving. The method comprises the following steps: processing point cloud data according to a 3D point cloud perception model to generate first target object data; processing visual data according to a visual model to generate second target object data; performing data association matching according to the first target object data and the second target object data to generate a data association result; and tracking the target object according to the data association matching result. Applied to the environment perception system of an intelligent networked automobile, the invention combines the advantages of a laser radar system and a vision camera system to perceive environmental information accurately, and solves the problem that the prior art can hardly meet the accuracy and reliability requirements of an environment perception system.

Description

Environment sensing method and device, electronic equipment, storage medium and vehicle
Technical Field
The application relates to the technical field of intelligent driving of vehicles, and in particular to an environment sensing method and device, an electronic device, a storage medium and a vehicle.
Background
As the number of automobiles in use increases, traffic safety and energy consumption have become major factors restricting the development of the automobile industry. The intelligent networked automobile provides a solution to these problems; it is therefore an important development direction for the future and has become a research hotspot of the automobile industry.
The environment perception system is the medium through which the intelligent networked automobile interacts with the external environment, and is a precondition for its decision-making system. Therefore, a key breakthrough for the intelligent networked automobile lies in the construction of a high-precision, high-reliability and real-time environment perception system. At present, the vision camera and the laser radar are the sensors mainly applied in environment perception systems: the laser radar can quickly and accurately acquire the spatial three-dimensional coordinates of a target object, improving the accuracy of environment perception, while the vision camera can quickly and clearly acquire the two-dimensional geometric shape and color information of the target object, providing rich environment semantic information for the environment perception system and thus supporting its understanding of the environment.
However, due to the limitations of the operating principle of any single type of sensor, using only a vision camera or only a laser radar in the prior art can hardly meet the accuracy and reliability requirements of an environment perception system.
Disclosure of Invention
In order to overcome the problems in the related art, the application provides an environment sensing method, an environment sensing device, an electronic device, a storage medium and a vehicle.
According to a first aspect of embodiments herein, there is provided a method of environmental awareness, the method comprising:
processing the point cloud data according to the 3D point cloud perception model to generate first target object data;
processing the visual data according to the visual model to generate second target object data;
performing data association matching according to the first target object data and/or the second target object data to generate a data association result;
and tracking the target object according to the data association matching result.
Optionally, before the step of processing the point cloud data according to the 3D point cloud sensing model to generate the first target object data, the method further includes:
point cloud data and visual data are obtained, wherein the point cloud data are related to the data of the target object obtained by the laser radar system, and the visual data are related to the data of the target object obtained by the visual camera system.
Optionally, the performing data association matching according to the first target object data and the second target object data to generate a data association result includes:
and under the condition that the target object is detected by the laser radar system, performing data association matching on data of previous and next frames of the target object according to the first target object data to generate a data association matching result.
Optionally, the performing data association matching according to the first target object data and the second target object data to generate a data association result further includes:
and under the condition that the target object is detected by the vision camera system, performing data association matching on the data of the front frame and the rear frame of the target object according to the second target object data to generate a data association matching result.
Optionally, the performing data association matching according to the first target object data and the second target object data to generate a data association result further includes:
and under the condition that the target object is jointly detected by the laser radar system and the vision camera system, performing data association matching on the data of the target object according to the first target object data and the second target object data to generate a data association matching result.
According to a second aspect of embodiments herein, there is provided an environment-aware apparatus, the apparatus comprising:
the first target object data acquisition module is used for processing the point cloud data according to the 3D point cloud perception model to generate first target object data;
the second target object data acquisition module is used for processing the visual data according to the visual model to generate second target object data;
the data association matching module is used for performing data association matching according to the first target object data and/or the second target object data to generate a data association result;
and the target object tracking module is used for tracking the target object according to the data association matching result.
Optionally, the apparatus further comprises:
the data acquisition module is used for acquiring point cloud data and visual data, wherein the point cloud data is related to the data of the target object acquired by the laser radar system, and the visual data is related to the data of the target object acquired by the visual camera system.
Optionally, the data association matching module includes:
and the first data association matching unit is used for performing data association matching on data of frames before and after the target object according to the first target object data under the condition that the target object is detected by the laser radar system, and generating a data association matching result.
Optionally, the data association matching module further includes:
and the second data association matching unit is used for performing data association matching on the data of the front frame and the rear frame of the target object according to the second target object data under the condition that the target object is detected by the vision camera system, and generating a data association matching result.
Optionally, the data association matching module further includes:
and the third data association matching unit is used for performing data association matching on the data of the target object according to the first target object data and the second target object data under the condition that the target object is jointly detected by the laser radar system and the vision camera system, and generating a data association matching result.
According to a third aspect of embodiments of the present application, there is provided an electronic apparatus, including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the environment sensing method.
According to a fourth aspect of embodiments herein, there is provided a computer storage medium having instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the environment sensing method.
According to a fifth aspect of the present invention, there is provided a vehicle comprising the environment-aware apparatus of the second aspect of the present invention.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
processing point cloud data according to a 3D point cloud sensing model to generate first target object data; processing the visual data according to the visual model to generate second target object data; performing data association matching according to the first target object data and the second target object data to generate a data association result; tracking a target object according to the data association matching result; and performing target object data fusion on the first target object data and the second target object data under the condition that the target object is detected to be in a target area, wherein the target area is related to the common detection range of the laser radar system and the vision camera system. Through the technical scheme provided by the embodiments of the application, the laser radar system and the vision camera system can be jointly applied to the environment perception system of the intelligent networked automobile, so that the advantages of the two sensor systems can be fused to accurately perceive environmental information, thereby solving the problem that the prior art can hardly meet the accuracy and reliability requirements of the environment perception system.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flowchart illustrating an environment sensing method according to an exemplary embodiment;
FIG. 2 is a flowchart illustrating another environment sensing method according to an exemplary embodiment;
FIG. 3 is a flowchart of step 103 of the environment sensing method shown in FIG. 1, according to an exemplary embodiment;
FIG. 4 is another flowchart of step 103 of the environment sensing method shown in FIG. 1, according to an exemplary embodiment;
FIG. 5 is another flowchart of step 103 of the environment sensing method shown in FIG. 1, according to an exemplary embodiment;
FIG. 6 is a block diagram illustrating an environment-aware apparatus according to an example embodiment;
FIG. 7 is a block diagram illustrating another context-aware apparatus according to an example embodiment;
FIG. 8 is an apparatus block diagram of the data association matching module 603 in an environment-aware apparatus block diagram shown in FIG. 6 according to an example embodiment;
FIG. 9 is an apparatus block diagram of the data association matching module 603 in an environment-aware apparatus block diagram shown in FIG. 6 according to an example embodiment;
FIG. 10 is an apparatus block diagram of the data association matching module 603 in an environment-aware apparatus block diagram shown in FIG. 6 according to an example embodiment;
FIG. 11 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
FIG. 1 is a flowchart illustrating an environment sensing method according to an exemplary embodiment; as shown in FIG. 1, the method includes the following steps.
Step 101, processing point cloud data according to a 3D point cloud perception model to generate first target object data.
It should be noted that, in the embodiment of the present application, since the PV-RCNN model combines the advantages of the point-based method and the voxel-based method and can improve the accuracy of data processing, the PV-RCNN model is preferably used as the 3D point cloud perception model. The point cloud data are input into the PV-RCNN model for processing, and the first target object data are obtained from the output of the PV-RCNN model, wherein the first target object data relate to the attributes of a target object. The first target object data may specifically include the category, longitudinal distance, lateral distance, vertical distance, length, width, height and heading angle of the target object, as well as its 3D detection frame (the 3D detection frame comprises the center point of the frame and the length, height and width of the frame, and is used for framing the target object). The target object mentioned in the embodiments of the present application refers to any object, moving or stationary, in front of, behind, or to the side of the vehicle.
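For illustration only, the following Python sketch shows one possible container for the first target object data described above and a thin wrapper around a 3D point cloud model; the `model.detect()` interface, the field names and the units are assumptions for this sketch and are not specified by the patent.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class FirstTargetObjectData:
    """One target object recovered from the point cloud by the 3D perception model."""
    category: str             # e.g. "car", "pedestrian"
    longitudinal_dist: float  # metres along the driving direction
    lateral_dist: float       # metres
    vertical_dist: float      # metres
    length: float             # 3D detection frame size, metres
    width: float
    height: float
    heading_angle: float      # radians
    center: np.ndarray        # centre point of the 3D detection frame (x, y, z)

def run_point_cloud_model(model, points: np.ndarray) -> List[FirstTargetObjectData]:
    """Feed LiDAR points (N x 4: x, y, z, intensity) to the 3D model and repackage
    its detections; `model.detect()` returning (boxes7d, labels) is a placeholder."""
    boxes, labels = model.detect(points)  # boxes: M x 7 -> (x, y, z, l, w, h, yaw)
    out = []
    for (x, y, z, l, w, h, yaw), label in zip(boxes, labels):
        out.append(FirstTargetObjectData(
            category=label,
            longitudinal_dist=float(x), lateral_dist=float(y), vertical_dist=float(z),
            length=float(l), width=float(w), height=float(h),
            heading_angle=float(yaw), center=np.array([x, y, z], dtype=float)))
    return out
```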
Further, in the embodiment of the present application, fig. 2 is a flowchart illustrating another environment sensing method according to an exemplary embodiment, and as shown in fig. 2, the following steps may be further included before step 101.
Step 210, point cloud data and visual data are obtained, wherein the point cloud data is related to the data of the target object obtained by the laser radar system, and the visual data is related to the data of the target object obtained by the visual camera system.
It should be noted that, in the embodiment of the present application, the laser radar system can quickly and accurately acquire the spatial three-dimensional coordinate data of the target object, that is, the point cloud data, while the vision camera system can quickly and clearly acquire the two-dimensional geometric shape and color information of the target object, that is, the visual data. The point cloud data and the visual data are stored in the cloud; they therefore need to be retrieved from the cloud and aligned by timestamp.
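A minimal sketch of the timestamp alignment mentioned above; the 0.05 s tolerance and the (timestamp, data) tuple layout are assumptions made for illustration.

```python
from typing import Any, List, Tuple

def pair_by_timestamp(lidar_frames: List[Tuple[float, Any]],
                      camera_frames: List[Tuple[float, Any]],
                      max_dt: float = 0.05):
    """Pair each LiDAR frame with the camera frame whose timestamp is closest,
    discarding pairs farther apart than max_dt seconds."""
    if not camera_frames:
        return []
    pairs = []
    for t_lidar, cloud in lidar_frames:
        t_cam, image = min(camera_frames, key=lambda f: abs(f[0] - t_lidar))
        if abs(t_cam - t_lidar) <= max_dt:
            pairs.append((t_lidar, cloud, image))
    return pairs
```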
Step 102, processing the visual data according to the visual model to generate second target object data.
It should be noted that, in the embodiment of the present application, since the YOLO V3 model can improve both the speed and the accuracy of data processing, the YOLO V3 model is preferably used as the visual model. The visual data are input into the YOLO V3 model for processing, and the second target object data are obtained from the output of the YOLO V3 model, wherein the second target object data relate to the attributes of the target object. The second target object data may specifically include the category of the target object and its 2D detection frame in the pixel coordinate system (the 2D detection frame comprises the center point of the frame and the width and height of the frame, and is used for framing the target object). The 2D detection frame is a rectangular frame line that marks the identified target after data processing of the planar image acquired by the camera equipment, and it completely encloses the outline of the identified target.
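Analogously, a hypothetical container for the second target object data (a 2D detection frame in pixel coordinates); the `model.detect()` call is again a placeholder interface, not the actual YOLO V3 API.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SecondTargetObjectData:
    """One target object recovered from the camera image by the visual model."""
    category: str
    cx: float  # centre point of the 2D detection frame, pixels
    cy: float
    w: float   # width of the 2D detection frame, pixels
    h: float   # height of the 2D detection frame, pixels

def run_visual_model(model, image) -> List[SecondTargetObjectData]:
    """Repackage 2D detections; `model.detect()` returning (boxes_cxcywh, labels)
    in pixel coordinates is a placeholder interface."""
    boxes, labels = model.detect(image)
    return [SecondTargetObjectData(label, float(cx), float(cy), float(w), float(h))
            for (cx, cy, w, h), label in zip(boxes, labels)]
```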
Step 103, performing data association matching according to the first target object data and/or the second target object data to generate a data association result.
Further, in the embodiment of the present application, fig. 3 is a flowchart of step 103 in the flowchart of an environment sensing method shown in fig. 1 according to an exemplary embodiment, and as shown in fig. 3, step 103 may specifically include the following steps.
Step 301, performing data association matching on data of previous and next frames of the target object according to the first target object data under the condition that the target object is detected by the laser radar system, and generating a data association matching result.
It should be noted that, in the embodiment of the present application, when a target object first appears in the detection range of the laser radar system, the first target object data corresponding to the target object at that moment are acquired; when the laser radar system detects the target object again at the next moment, the first target object data corresponding to the target object at the current moment are acquired. By analogy, the first target object data of the target object at the current moment and at the previous moment can both be obtained. Because the target object detected by the laser radar system at the current moment and the target object detected at the previous moment may or may not be the same object, an association matrix is constructed from the first target object data of the current moment and of the previous moment using Euclidean distance and cosine similarity, and the data association matching result is then obtained from the output of the association matrix. The data association matching result comprises: completely matched target object data (the target object detected by the laser radar system at the current moment and the target object detected at the previous moment are the same target object), new target object data (the target object detected at the current moment is not the same as the target object detected at the previous moment, i.e. the target object detected at the current moment is a new target object), and no target object at the current moment (the laser radar system does not detect the target object at the current moment).
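A sketch of the frame-to-frame association just described. The patent only specifies that an association matrix is built from Euclidean distance and cosine similarity; the cost formula, the gating value and the use of the Hungarian algorithm (`scipy.optimize.linear_sum_assignment`) to read matches off the matrix are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def lidar_frame_association(prev_objs, curr_objs, cost_gate=2.0):
    """Associate LiDAR detections of the previous and current frames.
    Each object is a dict with a 3D 'center' and a feature vector 'dims'
    (here the box length/width/height); the cost mixes Euclidean distance
    between centers with (1 - cosine similarity) between features."""
    if not prev_objs or not curr_objs:
        return [], list(range(len(curr_objs))), list(range(len(prev_objs)))
    cost = np.zeros((len(prev_objs), len(curr_objs)))
    for i, p in enumerate(prev_objs):
        for j, c in enumerate(curr_objs):
            dist = np.linalg.norm(p["center"] - c["center"])
            cos = float(np.dot(p["dims"], c["dims"]) /
                        (np.linalg.norm(p["dims"]) * np.linalg.norm(c["dims"]) + 1e-9))
            cost[i, j] = dist + (1.0 - cos)
    rows, cols = linear_sum_assignment(cost)
    matched = [(i, j) for i, j in zip(rows, cols) if cost[i, j] < cost_gate]
    matched_prev = {i for i, _ in matched}
    matched_curr = {j for _, j in matched}
    new_targets = [j for j in range(len(curr_objs)) if j not in matched_curr]
    disappeared = [i for i in range(len(prev_objs)) if i not in matched_prev]
    return matched, new_targets, disappeared
```

In this reading, matched pairs correspond to completely matched target object data, unmatched current-frame indices to new target object data, and unmatched previous-frame indices to targets absent at the current moment.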
Further, in the embodiment of the present application, fig. 4 is another flowchart of step 103 in the flowchart of an environment sensing method shown in fig. 1 according to an exemplary embodiment, and as shown in fig. 4, step 103 may further include the following steps.
Step 401, performing data association matching on data of previous and subsequent frames of the target object according to the second target object data under the condition that the target object is detected by the vision camera system, and generating a data association matching result.
It should be noted that, in the embodiment of the present application, when a target object first appears in the detection range of the vision camera system, the second target object data corresponding to the target object at that moment are acquired; when the vision camera system detects the target object again at the next moment, the second target object data corresponding to the target object at the current moment are acquired. By analogy, the second target object data of the target object at the current moment and at the previous moment can both be obtained. Because the target object detected by the vision camera system at the current moment and the target object detected at the previous moment may or may not be the same object, an association matrix is constructed from the second target object data of the current moment and of the previous moment using an Intersection over Union (IoU) association method, and the data association matching result is then obtained from the output of the association matrix. The data association matching result comprises: completely matched target object data (the target object detected by the vision camera system at the current moment and the target object detected at the previous moment are the same target object), new target object data (the target object detected at the current moment is not the same as the target object detected at the previous moment, i.e. the target object detected at the current moment is a new target object), and no target object at the current moment (the vision camera system does not detect the target object at the current moment).
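A corresponding sketch for the camera branch, building the association matrix from IoU between the 2D detection frames of the previous and current frames; the 0.3 gate and the Hungarian assignment are again assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou_cxcywh(a, b):
    """IoU of two 2D boxes given as (cx, cy, w, h) in pixels."""
    ax1, ay1, ax2, ay2 = a[0] - a[2] / 2, a[1] - a[3] / 2, a[0] + a[2] / 2, a[1] + a[3] / 2
    bx1, by1, bx2, by2 = b[0] - b[2] / 2, b[1] - b[3] / 2, b[0] + b[2] / 2, b[1] + b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def camera_frame_association(prev_boxes, curr_boxes, iou_gate=0.3):
    """Associate camera 2D detections of the previous and current frames via an
    IoU association matrix; pairs below iou_gate are rejected, leaving new or
    disappeared targets, mirroring the three matching outcomes described above."""
    if not prev_boxes or not curr_boxes:
        return [], list(range(len(curr_boxes))), list(range(len(prev_boxes)))
    iou = np.array([[iou_cxcywh(p, c) for c in curr_boxes] for p in prev_boxes])
    rows, cols = linear_sum_assignment(-iou)  # maximise total IoU
    matched = [(i, j) for i, j in zip(rows, cols) if iou[i, j] >= iou_gate]
    new_targets = [j for j in range(len(curr_boxes)) if j not in {j for _, j in matched}]
    disappeared = [i for i in range(len(prev_boxes)) if i not in {i for i, _ in matched}]
    return matched, new_targets, disappeared
```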
Further, in the embodiment of the present application, fig. 5 is another flowchart of step 103 in the flowchart of an environment sensing method shown in fig. 1 according to an exemplary embodiment, and as shown in fig. 5, step 103 may further include the following steps.
Step 501, under the condition that the target object is detected by the laser radar system and the vision camera system together, performing data association matching on data of the target object according to the first target object data and the second target object data to generate a data association matching result.
It should be noted that, in the embodiment of the present application, when the target object is detected in the detection range common to the laser radar system and the vision camera system, the first target object data and the second target object data corresponding to the target object can be obtained according to step 101 and step 102, respectively. Because the first target object data acquired by the laser radar system are three-dimensional coordinate data while the second target object data acquired by the vision camera system are 2D image data, the two kinds of data need to be aligned before data association matching can be performed. Either the first target object data, being three-dimensional coordinate data, can be projected onto the 2D image, so that they are converted from three-dimensional coordinate data into 2D image data, or the second target object data, being 2D image data, can be back-projected into the three-dimensional coordinate system, so that they are converted from 2D image data into three-dimensional coordinate data. However, back-projecting the 2D image data into the three-dimensional coordinate system requires additional prior information, and the error of that prior information would be introduced into the result. Therefore, the former implementation is selected, that is, the first target object data, being three-dimensional coordinate data, are projected onto the 2D image.
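The projection step chosen above can be sketched with a standard pinhole camera model; the extrinsic matrix `T_cam_lidar` and intrinsic matrix `K` are assumed to come from offline calibration and are not specified by the patent.

```python
import numpy as np

def project_lidar_to_image(points_lidar: np.ndarray,
                           T_cam_lidar: np.ndarray,
                           K: np.ndarray):
    """Project N x 3 LiDAR-frame points (e.g. 3D detection frame centers) into
    pixel coordinates. T_cam_lidar is the 4x4 LiDAR-to-camera extrinsic
    transform and K the 3x3 camera intrinsic matrix."""
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])  # N x 4
    cam = (T_cam_lidar @ pts_h.T)[:3]   # 3 x N points in the camera frame
    uv = K @ cam
    uv = uv[:2] / uv[2:3]               # perspective divide
    return uv.T, cam[2]                 # pixel coordinates (N x 2) and depths (negative = behind camera)
```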
After the first target object data and the second target object data have been aligned, data association matching is performed, using a center point association method and an IoU association method, between the first target object data corresponding to the target object acquired by the laser radar system and the second target object data corresponding to the target object acquired by the vision camera system; the first target object data and the second target object data corresponding to the same target object are associated with each other, and the data association matching result is thereby obtained. The data association matching result comprises: completely matched target object data (the first target object data detected by the laser radar system and the second target object data detected by the vision camera system correspond to the same target object), new target object data (the first target object data detected by the laser radar system and the second target object data detected by the vision camera system correspond to different target objects), and no target object at the current moment (neither the laser radar system nor the vision camera system detects a target object at the current moment).
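Once the 3D detections have been projected, the center point association described above reduces to a nearest-center test in pixel space (an IoU check on projected boxes could be added in the same way); the 40-pixel gate and the greedy matching order are illustrative assumptions.

```python
import numpy as np

def lidar_camera_match(lidar_centers_uv, camera_boxes, center_gate=40.0):
    """Center point association between projected LiDAR detections and camera
    2D detection frames. lidar_centers_uv: list of (u, v) pixel centers of the
    projected 3D detection frames; camera_boxes: list of (cx, cy, w, h) boxes.
    Returns matched index pairs plus the unmatched indices on each side."""
    matched, used_cam = [], set()
    for i, (u, v) in enumerate(lidar_centers_uv):
        best_j, best_d = None, center_gate
        for j, (cx, cy, w, h) in enumerate(camera_boxes):
            if j in used_cam:
                continue
            d = float(np.hypot(u - cx, v - cy))
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            matched.append((i, best_j))
            used_cam.add(best_j)
    lidar_only = [i for i in range(len(lidar_centers_uv)) if i not in {i for i, _ in matched}]
    camera_only = [j for j in range(len(camera_boxes)) if j not in used_cam]
    return matched, lidar_only, camera_only
```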
Step 104, tracking the target object according to the data association matching result.
It should be noted that, in the embodiment of the present application, tracking the target object is mainly used for managing its life cycle, that is, continuously tracking the position of the target object from the moment it is first detected until it is no longer detected. Throughout the perception and recognition process, the whole period from the appearance to the disappearance of the target object in the system is continuously identified, forming complete life cycle management; this makes it convenient to evaluate the validity of the target object and to eliminate invalid or false target objects in time, further improving the reliability and effectiveness of environment sensing.
Specifically, when the target object is detected only by the laser radar system, the laser radar system continuously tracks the first target object data of the target object across frames according to the data association matching result obtained in step 301. When the data association matching result obtained in step 301 is completely matched target object data, the first target object data corresponding to the target object at the current moment are input into a Kalman filtering algorithm, so that the first target object data corresponding to the target object at the next moment can be predicted; the first target object data actually observed at the next moment are then acquired and used to correct the prediction, which improves the accuracy of the Kalman filtering algorithm and yields more accurate first target object data for the target object at the next moment. When the data association matching result obtained in step 301 is new target object data, the first target object data corresponding to the new target object at the current moment are input into the Kalman filtering algorithm in the same way, and the prediction for the next moment is corrected with the first target object data actually observed at the next moment, again improving the accuracy of the data processed by the Kalman filtering algorithm. When the data association matching result obtained in step 301 is that no target object exists at the current moment, the number of consecutive frames in which the corresponding target object has not appeared is recorded, and when this number exceeds a preset frame number threshold, the target object is judged to have disappeared. The preset frame number threshold can be adjusted according to how long the target object has continuously existed and according to its position: the longer the target object has existed, the larger the frame number threshold; the shorter it has existed, the smaller the threshold. If the target object is located within the detection range of the sensor system, the frame number threshold is increased, and if it is located outside the detection range of the sensor system, the frame number threshold is decreased.
In practice, because of sensor system errors and the like, a target object may momentarily disappear or may itself be a spurious detection, so setting the frame number threshold avoids erroneously removing the target object. The longer a target object has existed, the less likely it is to be spurious, so a larger frame number threshold is reserved for it; conversely, the shorter it has existed, the more likely it is to be spurious, so a smaller frame number threshold is reserved for it. If the target object appears within the detection range of the sensor system, for example directly in front of the sensor system or very close to it, it may pose a certain threat to the current environment sensing system; attention should therefore be focused on it and the frame number threshold should be increased appropriately. If the target object appears outside the detection range of the sensor system, it will not pose a threat to the current environment sensing system, and the frame number threshold should therefore be decreased appropriately.
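As a minimal sketch of the predict/correct loop and the adaptive frame number threshold described above: the constant-velocity state model, the noise matrices and every numeric constant below are illustrative assumptions rather than values taken from the patent.

```python
import numpy as np

class SimpleKalmanTrack:
    """Constant-velocity Kalman filter over a target's (x, y) position, plus a
    counter of consecutive frames in which the target was not seen."""
    def __init__(self, x0: float, y0: float, dt: float = 0.1):
        self.x = np.array([x0, y0, 0.0, 0.0])          # state: [x, y, vx, vy]
        self.P = np.eye(4)
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = 0.01 * np.eye(4)                      # process noise
        self.R = 0.10 * np.eye(2)                      # measurement noise
        self.age_s = 0.0                               # how long the track has existed
        self.missed = 0                                # consecutive frames without a match
        self.dt = dt

    def predict(self):
        """Predict the target object data for the next moment."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        self.age_s += self.dt
        return self.x[:2]

    def correct(self, z):
        """Correct the prediction with the actually observed position z = (x, y)."""
        y = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        self.missed = 0

def frame_threshold(track_age_s: float, inside_fov: bool,
                    base: int = 5, per_second: float = 2.0, cap: int = 30) -> int:
    """Heuristic frame number threshold: larger for long-lived targets and for
    targets inside the sensors' detection range, smaller otherwise."""
    thr = base + per_second * track_age_s
    if not inside_fov:
        thr *= 0.5
    return int(max(1, min(thr, cap)))

def should_drop(track: SimpleKalmanTrack, inside_fov: bool) -> bool:
    """A track is judged to have disappeared once its consecutive missed-frame
    count exceeds the (adaptive) frame number threshold."""
    return track.missed > frame_threshold(track.age_s, inside_fov)
```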
When the target object is detected only by the vision camera system, the method by which the vision camera system continuously tracks the second target object data of the target object across frames according to the data association matching result obtained in step 401 is similar to the case where the target object is detected only by the laser radar system, and details are not repeated here.
When the target object is detected by both the laser radar system and the vision camera system, the method by which the laser radar system and the vision camera system continuously track the first target object data and the second target object data of the target object across frames according to the data association matching result obtained in step 501 is similar to the case where the target object is detected only by the laser radar system, and details are not repeated here.
Processing point cloud data according to a 3D point cloud sensing model to generate first target object data; processing the visual data according to the visual model to generate second target object data; performing data association matching according to the first target object data and/or the second target object data to generate a data association result; and tracking the target object according to the data association matching result. Through the technical scheme provided by the embodiments of the application, the laser radar system and the vision camera system can be jointly applied to the environment perception system of the intelligent networked automobile, so that the advantages of the two sensor systems can be fused to accurately perceive environmental information, thereby solving the problem that the prior art can hardly meet the accuracy and reliability requirements of the environment perception system. The point cloud data are processed with the PV-RCNN model, which improves the accuracy of point cloud data processing; the visual data are processed with the YOLO V3 model, which improves both the speed and the accuracy of visual data processing. The target object is tracked according to the data association matching result, so that its position can be tracked continuously from the moment it is first detected until it is no longer detected; throughout the perception and recognition process, the whole period from the appearance to the disappearance of the target object in the system is continuously identified, which manages the life cycle of the target object, makes it convenient to evaluate its validity, allows invalid or false target objects to be removed in time, and thereby improves the reliability and effectiveness of the environment sensing system.
Fig. 6 is a block diagram illustrating an environment-aware apparatus according to an exemplary embodiment, and referring to fig. 6, the apparatus includes a first target object data acquisition module 601, a second target object data acquisition module 602, a data association matching module 603, and a target object tracking module 604.
A first target object data obtaining module 601, configured to process the point cloud data according to the 3D point cloud sensing model to generate first target object data;
a second target object data obtaining module 602, configured to process the visual data according to the visual model to generate second target object data;
the data association matching module 603 is configured to perform data association matching according to the first target object data and/or the second target object data, and generate a data association result;
and a target object tracking module 604, configured to track the target object according to the data association matching result.
Alternatively, FIG. 7 is a block diagram illustrating another context-aware apparatus according to an example embodiment. Referring to fig. 7, the apparatus includes a data acquisition module 701.
The data acquisition module 701 is configured to acquire point cloud data and visual data, where the point cloud data is related to data of a target object acquired by the laser radar system, and the visual data is related to data of the target object acquired by the visual camera system.
Optionally, fig. 8 is an apparatus block diagram of the data association matching module 603 in the environment-aware apparatus block diagram shown in fig. 6 according to an exemplary embodiment. Referring to fig. 8, the apparatus includes a first data association matching unit 801.
The first data association matching unit 801 is configured to perform data association matching on data of frames before and after a target object according to first target object data to generate a data association matching result when the target object is detected by the laser radar system.
Optionally, fig. 9 is an apparatus block diagram of the data association matching module 603 in the environment-aware apparatus block diagram shown in fig. 6 according to an exemplary embodiment. Referring to fig. 9, the apparatus includes a second data association matching unit 901.
A second data association matching unit 901, configured to perform data association matching on data of frames before and after the target object according to the second target object data when the target object is detected by the vision camera system, so as to generate a data association matching result.
Optionally, fig. 10 is an apparatus block diagram of the data association matching module 603 in the environment-aware apparatus block diagram shown in fig. 6 according to an exemplary embodiment. Referring to fig. 10, the apparatus includes a third data association matching unit 1001.
And a third data association matching unit 1001, configured to perform data association matching on data of the target object according to the first target object data and the second target object data when the target object is detected by both the laser radar system and the vision camera system, and generate a data association matching result.
Processing point cloud data according to a 3D point cloud sensing model to generate first target object data; processing the visual data according to the visual model to generate second target object data; performing data association matching according to the first target object data and the second target object data to generate a data association result; and tracking the target object according to the data association matching result. Through the technical scheme provided by the embodiments of the application, the laser radar system and the vision camera system can be jointly applied to the environment perception system of the intelligent networked automobile, so that the advantages of the two sensor systems can be fused to accurately perceive environmental information, thereby solving the problem that the prior art can hardly meet the accuracy and reliability requirements of the environment perception system. The point cloud data are processed with the PV-RCNN model, which improves the accuracy of point cloud data processing; the visual data are processed with the YOLO V3 model, which improves both the speed and the accuracy of visual data processing. The target object is tracked according to the data association matching result, so that its position can be tracked continuously from the moment it is first detected until it is no longer detected; throughout the perception and recognition process, the whole period from the appearance to the disappearance of the target object in the system is continuously identified, which manages the life cycle of the target object, makes it convenient to evaluate its validity, allows invalid or false target objects to be removed in time, and thereby improves the reliability and effectiveness of the environment sensing system.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
A third embodiment of the present invention relates to an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the environment sensing method of the first aspect.
A fourth embodiment of the present invention provides a computer-readable storage medium in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform the environment sensing method according to the first aspect.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc., does not indicate any ordering; these words may be interpreted as names.
Fig. 11 is a block diagram illustrating an electronic device 1100 according to an example embodiment. For example, the electronic device 1100 can be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 11, electronic device 1100 may include one or more of the following components: processing component 1102, memory 1104, power component 1106, multimedia component 1108, audio component 1110, input/output interfaces 1112, sensor component 1114, and communications component 1116.
The processing component 1102 generally controls the overall operation of the device 1100, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 1102 may include one or more processors 1120 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 1102 may include one or more modules that facilitate interaction between the processing component 1102 and other components. For example, the processing component 1102 may include a multimedia module to facilitate interaction between the multimedia component 1108 and the processing component 1102.
The memory 1104 is configured to store various types of data to support operation at the device 1100. Examples of such data include instructions for any application or method operating on device 1100, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1104 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 1106 provides power to the various components of the electronic device 1100. The power components 1106 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 1100.
The multimedia component 1108 includes a screen that provides an output interface between the electronic device 1100 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1108 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 1100 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 1110 is configured to output and/or input audio signals. For example, the audio component 1110 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 1100 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 1104 or transmitted via the communication component 1116. In some embodiments, the audio assembly 1110 further includes a speaker for outputting audio signals.
The input/output interface 1112 provides an interface between the processing component 1102 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 1114 includes one or more sensors for providing various aspects of state assessment for the electronic device 1100. For example, the sensor assembly 1114 can detect an open/closed status of the electronic device 1100 and the relative positioning of components, such as the display and keypad of the electronic device 1100; the sensor assembly 1114 can also detect a change in position of the electronic device 1100 or of a component of the electronic device 1100, the presence or absence of user contact with the electronic device 1100, the orientation or acceleration/deceleration of the electronic device 1100, and a change in temperature of the electronic device 1100. The sensor assembly 1114 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 1114 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1114 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1116 is configured to facilitate wired or wireless communication between the electronic device 1100 and other devices. The electronic device 1100 may access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 1116 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1116 also includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 1100 may be implemented by one or more Application-Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field-Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 1104 comprising instructions, executable by the processor 1120 of the electronic device 1100 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This invention is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements that have been described above and shown in the drawings, and that various modifications and changes can be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (10)

1. A method of environmental awareness, the method comprising:
processing the point cloud data according to the 3D point cloud perception model to generate first target object data;
processing the visual data according to the visual model to generate second target object data;
performing data association matching according to the first target object data and/or the second target object data to generate a data association result;
and tracking the target object according to the data association matching result.
2. The method of claim 1, further comprising, prior to the step of processing the point cloud data according to the 3D point cloud perceptual model to generate the first target object data:
point cloud data and visual data are obtained, wherein the point cloud data are related to the data of the target object obtained by the laser radar system, and the visual data are related to the data of the target object obtained by the visual camera system.
3. The method according to claim 1, wherein the performing data association matching according to the first target object data and the second target object data to generate a data association result comprises:
and under the condition that the target object is detected by the laser radar system, performing data association matching on data of frames before and after the target object according to the first target object data to generate a data association matching result.
4. The method according to claim 1, wherein the performing data association matching according to the first target object data and the second target object data to generate a data association result further comprises:
and under the condition that the target object is detected by the vision camera system, performing data association matching on the data of the front frame and the data of the rear frame of the target object according to the second target object data to generate a data association matching result.
5. The method according to claim 1, wherein the performing data association matching according to the first target object data and the second target object data to generate a data association result further comprises:
and under the condition that the target object is jointly detected by the laser radar system and the vision camera system, performing data association matching on the data of the target object according to the first target object data and the second target object data to generate a data association matching result.
6. An environment-aware apparatus, comprising:
the first target object data acquisition module is used for processing the point cloud data according to the 3D point cloud perception model to generate first target object data;
the second target object data acquisition module is used for processing the visual data according to the visual model to generate second target object data;
the data association matching module is used for performing data association matching according to the first target object data and/or the second target object data to generate a data association result;
and the target object tracking module is used for tracking the target object according to the data association matching result.
7. The apparatus of claim 6, further comprising:
the data acquisition module is used for acquiring point cloud data and visual data, wherein the point cloud data is related to the data of the target object acquired by the laser radar system, and the visual data is related to the data of the target object acquired by the visual camera system.
8. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the environment sensing method of any one of claims 1 to 5.
9. A computer storage medium having instructions that, when executed by a processor of a mobile terminal, enable the mobile terminal to perform the environment sensing method of any one of claims 1 to 5.
10. A vehicle, characterized in that it comprises the environment sensing device according to any one of claims 6 to 7.
CN202211466683.0A 2022-11-22 2022-11-22 Environment sensing method and device, electronic equipment, storage medium and vehicle Pending CN115825979A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211466683.0A CN115825979A (en) 2022-11-22 2022-11-22 Environment sensing method and device, electronic equipment, storage medium and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211466683.0A CN115825979A (en) 2022-11-22 2022-11-22 Environment sensing method and device, electronic equipment, storage medium and vehicle

Publications (1)

Publication Number Publication Date
CN115825979A (en) 2023-03-21

Family

ID=85530232

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211466683.0A Pending CN115825979A (en) 2022-11-22 2022-11-22 Environment sensing method and device, electronic equipment, storage medium and vehicle

Country Status (1)

Country Link
CN (1) CN115825979A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117152197A (en) * 2023-10-30 2023-12-01 成都睿芯行科技有限公司 Method and system for determining tracking object and method and system for tracking
CN117152197B (en) * 2023-10-30 2024-01-23 成都睿芯行科技有限公司 Method and system for determining tracking object and method and system for tracking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination