CN113076830A - Environment passing area detection method and device, vehicle-mounted terminal and storage medium

Info

Publication number: CN113076830A
Application number: CN202110303848.1A
Authority: CN (China)
Legal status: Withdrawn
Other languages: Chinese (zh)
Inventor: 戴玉静
Current Assignee: Shanghai OFilm Smart Car Technology Co Ltd
Original Assignee: Shanghai OFilm Smart Car Technology Co Ltd
Application filed by Shanghai OFilm Smart Car Technology Co Ltd, with priority to CN202110303848.1A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Abstract

The application discloses an environment passing area detection method and device, a vehicle-mounted terminal and a storage medium, belonging to the technical field of image processing. The method is applied to the vehicle-mounted terminal and comprises the following steps: acquiring a bird's-eye view image of the surrounding environment of a vehicle, identifying an obstacle in the bird's-eye view image, and determining the position area of the obstacle in the bird's-eye view image; acquiring millimeter wave data collected from the surrounding environment by a millimeter wave radar of the vehicle-mounted terminal; acquiring the position coordinates of the millimeter wave data in a target coordinate system, wherein the target coordinate system is the pixel coordinate system of the bird's-eye view image; and re-dividing the impassable area and the passable area of the bird's-eye view image according to the position relation between the position coordinates and the position area. By combining the position coordinates of the millimeter wave data in the target coordinate system, a scheme of decision-level information fusion is realized, the impassable area and the passable area of the bird's-eye view image are re-divided, and the accuracy of perception of the surrounding environment by the vehicle-mounted terminal is improved.

Description

Environment passing area detection method and device, vehicle-mounted terminal and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an environment passing area detection method and apparatus, a vehicle-mounted terminal, and a storage medium.
Background
Vehicle-mounted terminals have been widely used in various vehicles, and most of them can implement functions such as environment perception, path planning and path tracking. In the aspect of environment perception, the vehicle-mounted terminal generally senses the surrounding environment information of the vehicle by using vehicle-mounted sensors such as a millimeter wave radar, a camera or a laser radar, so as to prepare for subsequent path planning and other tasks. Currently, in environment perception, the passable area is mainly detected through visual perception, that is, the surrounding environment of the vehicle is shot by a camera and the passable area in the surrounding environment is identified from the shot images.
In this technical scheme, misidentification easily occurs in the image identification process, so the passable area in the surrounding environment is not accurately identified, which reduces the accuracy of perception of the surrounding environment by the vehicle-mounted terminal.
Disclosure of Invention
The embodiment of the application provides an environment passing area detection method and device, a vehicle-mounted terminal and a storage medium, which can re-divide the impassable areas and passable areas in the surrounding environment and improve the accuracy of perception of the surrounding environment by the vehicle-mounted terminal.
In one aspect, an embodiment of the present application provides an environment passing area detection method, where the method is applied to a vehicle-mounted terminal, and the method includes:
acquiring a bird's-eye view image of the surrounding environment of a vehicle, identifying an obstacle in the bird's-eye view image, and determining a position area of the obstacle in the bird's-eye view image;
acquiring millimeter wave data collected from the surrounding environment by a millimeter wave radar of the vehicle-mounted terminal;
acquiring position coordinates of the millimeter wave data in a target coordinate system, wherein the target coordinate system is a pixel coordinate system of the bird's-eye view image;
and re-dividing the impassable area and the passable area of the bird's-eye view image according to the position relation between the position coordinates and the position area.
In the embodiment of the application, after the bird's-eye view image of the surrounding environment of the vehicle is acquired, the obstacle in the bird's-eye view image is identified and the position area of the obstacle in the bird's-eye view image is determined. Combined with the position coordinates of the millimeter wave data in the target coordinate system, the position relation between the position coordinates and the position area is obtained, a scheme of decision-level information fusion is realized, and the impassable area and the passable area of the bird's-eye view image are re-divided, which achieves a correction effect and improves the accuracy of perception of the surrounding environment by the vehicle-mounted terminal.
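For illustration only, the overall flow can be sketched as follows in Python. The binary obstacle mask, the single 3x4 radar-to-pixel projection matrix and the correction rule are assumptions made for the sketch, not the claimed implementation, which may chain several coordinate conversions and a more elaborate re-division rule.

```python
import numpy as np

def project_radar_to_pixels(radar_xyz: np.ndarray, radar_to_pixel: np.ndarray) -> np.ndarray:
    """Project 3-D millimeter wave points into the bird's-eye view pixel frame.

    radar_xyz: (N, 3) point positions in the radar coordinate system.
    radar_to_pixel: (3, 4) homogeneous projection, a stand-in for the
    "first conversion formula" mentioned later in the text.
    """
    homog = np.hstack([radar_xyz, np.ones((len(radar_xyz), 1))])  # (N, 4)
    uvw = (radar_to_pixel @ homog.T).T                            # (N, 3)
    return uvw[:, :2] / uvw[:, 2:3]                               # perspective division

def redivide(obstacle_mask: np.ndarray, radar_pixels: np.ndarray) -> np.ndarray:
    """Decision-level correction: also mark as impassable any pixel hit by a
    radar return that the visual segmentation left passable."""
    corrected = obstacle_mask.copy()                # 1 = impassable, 0 = passable
    h, w = corrected.shape
    for u, v in np.round(radar_pixels).astype(int):
        if 0 <= v < h and 0 <= u < w:
            corrected[v, u] = 1
    return corrected
```

The obstacle mask itself would come from the image recognition model described in the detailed description below.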
As an optional implementation manner, in an aspect of the embodiments of the present application, the re-dividing the impassable area and the passable area of the bird's-eye view image according to the position relation between the position coordinates and the position area includes:
when the position coordinates comprise position coordinates outside the position area, clustering the position coordinates according to the distances between the position coordinates to obtain a clustering result;
acquiring a target category from the clustering result, wherein the target category comprises the category of the position coordinates outside the position area;
and re-dividing the impassable area and the passable area of the bird's-eye view image according to the position coordinates in the target category.
In the embodiment of the application, when the position coordinates of the millimeter wave data in the target coordinate system include position coordinates outside the position area where an obstacle is located, the position coordinates are clustered according to the distances between them to obtain a clustering result, and the impassable area and the passable area of the bird's-eye view image are re-divided through the position coordinates in the target category. This reduces the error in dividing the impassable area in the bird's-eye view image and improves the accuracy of perception of the surrounding environment by the vehicle-mounted terminal.
As an optional implementation manner, in an aspect of the embodiment of the present application, the re-dividing the impassable area and the passable area of the bird's-eye view image according to the position coordinates in the target category includes:
acquiring a first line segment according to the position coordinates in a first category, wherein the first category is any one of the target categories;
acquiring a target extension line according to the first line segment;
and re-dividing the impassable area and the passable area of the bird's-eye view image according to the first line segment and the target extension line.
In the embodiment of the application, when re-dividing according to the position coordinates in the target category, the target extension line is obtained through the first line segment formed by the position coordinates, so that the obstacle boundary detected in the millimeter wave data is determined and extended, which expands the application range of the millimeter wave data.
As an optional implementation manner, in an aspect of the embodiment of the present application, the obtaining a target extension line according to the first line segment includes:
determining a shooting area to which a first end point belongs in the bird's-eye view image, wherein the first end point is any one end point of the first line segment;
determining a camera corresponding to the shooting area and a setting position of the camera relative to a vehicle body;
and extending the first end point along the direction matched with the setting position to generate the target extension line.
In the embodiment of the application, for each of the two end points of the first line segment, the shooting area to which the end point belongs in the bird's-eye view image is determined, the camera corresponding to the shooting area and its setting position relative to the vehicle body are determined, and the end point is extended along the direction matched with the setting position to generate the target extension line. This realizes the extension of the obstacle boundary indicated in the millimeter wave data and improves the practicability of the millimeter wave data.
As an optional implementation manner, in an aspect of the embodiments of the present application, the re-dividing the impassable area and the passable area of the bird's-eye view image according to the position relation between the position coordinates and the position area includes:
when the position coordinates coincide with the edge coordinates of the position area, determining the area boundary formed by the edge coordinates of the position area;
and dividing the impassable area and the passable area of the bird's-eye view image according to the area boundary.
In the embodiment of the present application, when the position coordinates coincide with the edge coordinates of the position area, the bird's-eye view image is still divided according to those edge coordinates. That is, the millimeter wave data coincides with the obstacle boundary in the bird's-eye view image, and the bird's-eye view image is divided using the original edge coordinates.
As an alternative implementation, in an aspect of the embodiments of the present application, the identifying an obstacle in the bird's-eye view image and determining a position area of the obstacle in the bird's-eye view image includes:
acquiring an object label of the obstacle by identifying the obstacle in the bird's-eye view image;
and determining the position area of the obstacle in the bird's-eye view image according to the object label of the obstacle.
In the embodiment of the application, after the obstacle in the bird's-eye view image is identified, the object label of the obstacle is acquired, and the position area of the obstacle in the bird's-eye view image is determined according to the object label. Whether an object is an obstacle can be determined through its object label, which improves the accuracy of judging the impassable area in the bird's-eye view image.
As an optional implementation manner, in an aspect of the embodiments of the present application, the position area includes a first area, and the first area does not include any one of the position coordinates; the re-dividing the impassable area and the passable area of the bird's-eye view image according to the position relation between the position coordinates and the position area comprises the following steps:
acquiring the object label corresponding to the first area after the bird's-eye view image is identified;
when the object label is a preset label, calibrating the first area according to the boundary of the first area, and dividing the first area into the impassable area;
and when the object label is not the preset label, dividing the first area into the passable area.
In the embodiment of the application, the position area where the obstacle is located further comprises a first area that is not identified in the millimeter wave data. Whether the millimeter wave data is in error or the bird's-eye view image is in error is determined according to the object label of the first area, so that mutual checking between the millimeter wave data and the bird's-eye view image is realized and the accuracy of perception of the surrounding environment by the vehicle-mounted terminal is improved.
As an optional implementation manner, in an aspect of the embodiment of the present application, before the acquiring millimeter wave data collected from the surrounding environment by a millimeter wave radar of the vehicle-mounted terminal, the method further includes:
acquiring a third acquisition frequency according to the first acquisition frequency of the millimeter wave radar and the second acquisition frequency of the camera;
the acquiring millimeter wave data collected from the surrounding environment by the millimeter wave radar of the vehicle-mounted terminal comprises the following steps:
acquiring, according to the third acquisition frequency, the millimeter wave data collected from the surrounding environment by the millimeter wave radar of the vehicle-mounted terminal;
the acquiring an image obtained by shooting the surrounding environment with the camera includes:
and acquiring, according to the third acquisition frequency, the image obtained by shooting the surrounding environment with the camera.
In the embodiment of the application, the acquisition frequency of the millimeter wave radar and that of the camera are unified, so that the acquired millimeter wave data and the bird's-eye view image are aligned in time, which improves the consistency of combining millimeter wave data with visual data and the accuracy of data analysis.
As an optional implementation manner, in an aspect of an embodiment of the present application, the millimeter wave data further includes speed information of the environmental object corresponding to the millimeter wave data relative to the vehicle, and before the acquiring the position coordinates of the millimeter wave data in a target coordinate system, the method further includes:
screening the millimeter wave data collected by the millimeter wave radar according to the speed information.
In the embodiment of the application, the acquired millimeter wave data can be screened, which improves the reliability of the millimeter wave data and the accuracy of perception of the surrounding environment by the vehicle-mounted terminal.
As an optional implementation manner, in an aspect of an embodiment of the present application, the screening the millimeter wave data according to the speed information includes:
acquiring historical millimeter wave data acquired by the millimeter wave radar in at least one acquisition period adjacent to the current acquisition period;
acquiring a speed variance according to the speed information of the current acquisition period and the speed information in the historical millimeter wave data;
and screening the millimeter wave data acquired in the current acquisition period according to the speed variance.
In the embodiment of the application, the speed variance is calculated from the speed information of the current acquisition period and the speed information in the historical millimeter wave data, and the acquired millimeter wave data is screened through the speed variance, which improves the accuracy of millimeter wave data screening.
As an optional implementation manner, in an aspect of an embodiment of the present application, the screening the millimeter wave data according to the speed information includes:
and screening the millimeter wave data acquired by the millimeter wave radar according to the speed information and a preset speed threshold.
In the embodiment of the application, the acquired millimeter wave data is screened through the speed information and a preset speed threshold, which improves the accuracy of millimeter wave data screening.
In another aspect, an embodiment of the present application provides an environment passing area detection device, where the device is applied to a vehicle-mounted terminal, and the device includes:
an area determination module, used for acquiring a bird's-eye view image of the surrounding environment of a vehicle, identifying an obstacle in the bird's-eye view image, and determining the position area of the obstacle in the bird's-eye view image;
a data acquisition module, used for acquiring millimeter wave data collected from the surrounding environment by a millimeter wave radar of the vehicle-mounted terminal;
a coordinate acquisition module, used for acquiring the position coordinates of the millimeter wave data in a target coordinate system, wherein the target coordinate system is the pixel coordinate system of the bird's-eye view image;
and an image dividing module, used for re-dividing the impassable area and the passable area of the bird's-eye view image according to the position relation between the position coordinates and the position area.
In another aspect, an embodiment of the present application provides a vehicle-mounted terminal, which includes a memory and a processor, where the memory stores a computer program, and when the computer program is executed by the processor, the processor implements the environment passing area detection method according to the above aspect and any optional implementation manner thereof.
In another aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the environment passing area detection method according to the above aspect and its optional implementation manners.
In another aspect, the present application provides a computer program product, which, when run on a computer, causes the computer to execute the environment passing area detection method according to the above aspect.
In another aspect, an embodiment of the present application provides an application publishing platform, configured to publish a computer program product, where, when the computer program product runs on a computer, the computer is caused to execute the environment passing area detection method according to the above aspect.
The technical scheme provided by the embodiment of the application can at least comprise the following beneficial effects:
the vehicle-mounted terminal identifies an obstacle in the aerial view image by acquiring the aerial view image of the surrounding environment of the vehicle, and determines the position area of the obstacle in the aerial view image; acquiring millimeter wave data acquired by a millimeter wave radar of the vehicle-mounted terminal to the surrounding environment; acquiring position coordinates of millimeter wave data in a target coordinate system, wherein the target coordinate system is a pixel coordinate system of the aerial view image; and re-dividing the impassable area and the passable area of the aerial view image according to the position relation between the position coordinates and the position area. According to the method and the device, the position relation between the position coordinate and the position area where the barrier is located is obtained by combining the position coordinate of the millimeter wave data under the target coordinate system, the scheme of decision-level information fusion is achieved, the impassable area and the passable area of the aerial view image are divided again, the corrected effect is obtained, and the accuracy of perception of the vehicle-mounted terminal on the surrounding environment is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flowchart of an environment passing area detection method according to an exemplary embodiment of the present application;
FIG. 2 is a flowchart of an environment passing area detection method according to an exemplary embodiment of the present application;
FIG. 3 is a schematic diagram of a predicted image according to an exemplary embodiment of the present application;
FIG. 4 is a schematic diagram of a predicted image including millimeter wave data according to an exemplary embodiment of the present application;
FIG. 5 is a schematic diagram of shooting areas in a predicted image according to an exemplary embodiment of the present application;
FIG. 6 is a block diagram of an environment passing area detection device according to an exemplary embodiment of the present application;
FIG. 7 is a schematic structural diagram of a vehicle-mounted terminal according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
Reference herein to "a plurality" means two or more. "And/or" describes the association relationship of the associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
It should be noted that the terms "first", "second", "third" and "fourth", etc. in the description and claims of the present application are used for distinguishing different objects, and are not used for describing a specific order. The terms "comprises," "comprising," and "having," and any variations thereof, of the embodiments of the present application, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The scheme provided by the application can be applied to the vehicle-mounted terminal of a vehicle driven in daily life, and the vehicle-mounted terminal realizes the process of sensing the surrounding environment of the vehicle.
Bird's-eye view image: a bird's-eye view is a perspective view of the ground drawn according to the perspective principle, as seen looking down from a high viewpoint. That is, it is an image of an area viewed from above, and it is more realistic than a plan view.
In daily life, changes in the surrounding environment during driving affect the driving safety of the vehicle, and therefore the detection of the surrounding environment of the vehicle is particularly important. After the vehicle detects the surrounding environment, fisheye images of the surrounding environment can generally be obtained, and the vehicle-mounted terminal identifies these images to learn which objects the surrounding environment contains. For example, during driving, a pedestrian appears right in front of the vehicle and other vehicles are nearby; the vehicle acquires images of its surrounding environment, recognizes them, and identifies information such as the pedestrian in front, thereby improving driving safety.
Optionally, such environment perception is even more widely used in the field of automatic driving. During automatic driving, the vehicle is controlled by the vehicle-mounted terminal, so the vehicle-mounted terminal needs to acquire information about the surrounding environment at all times and adjust the driving route in real time.
In the aspect of environment perception, the vehicle-mounted terminal generally senses the surrounding environment information of the vehicle by using vehicle-mounted sensors such as a millimeter wave radar, a camera or a laser radar, so as to prepare for subsequent path planning and other tasks. The image information collected by the camera can be regarded as visual perception data, and the information collected by the millimeter wave radar or the laser radar can be regarded as radar data. For visual perception data, the image recognition model usually has recognition errors, and the detection result is easily affected by factors such as road conditions, weather and illumination, so the environmental information acquired based on visual perception data is not stable enough. For example, in practical applications, because the image features of some obstacles, such as wall surfaces and stone piers, are close to those of the road surface, such obstacles are easily identified as road surface when the environment is detected from camera images, resulting in missed detections. Conversely, the image features of some passable areas are close to those of obstacles, such as floor stains, reflections of obstacles in surface water, fallen leaves and mottled tree shadows; these stains, reflections and the like are easily recognized as obstacles, resulting in false detections.
Radar data, by contrast, has strong penetrability and is less influenced by the environment, so it can be used to identify the surrounding environment of the vehicle. However, millimeter wave data is sparse, cannot be imaged, and cannot be used to recognize images or colors or to perform semantic analysis. Therefore, for an external environment that changes around the vehicle at any time, the detection result of the radar data is relatively limited, and its application scenarios are restricted.
In order to improve the accuracy of the vehicle-mounted terminal in sensing the surrounding environment, the present application combines radar data on the basis of visual perception data to detect the surrounding environment of the vehicle, obtaining more accurate impassable and passable areas in the surrounding environment and improving the driving safety of the vehicle.
Referring to fig. 1, a flowchart of an environment passing area detection method according to an exemplary embodiment of the present application is shown. The environment passing area detection method is applied to the vehicle-mounted terminal and, as shown in fig. 1, can comprise the following steps.
Step 101, acquiring a bird's-eye view image of the surrounding environment of the vehicle, identifying an obstacle in the bird's-eye view image, and determining a position area of the obstacle in the bird's-eye view image.
Optionally, the bird's-eye view image of the surroundings of the vehicle may include images in the four directions of the front, rear, left and right of the vehicle. For example, the vehicle is equipped with four cameras, through which images in the four directions are acquired simultaneously. From these images, the vehicle-mounted terminal identifies the obstacles contained in the bird's-eye view image and determines the position areas in which the obstacles are located. Optionally, in the bird's-eye view image, the impassable area is generally the position area in which an obstacle is located.
For example, the obstacle may be a pedestrian, a bicycle, a wall, a house, a tree, a stone pier, a door, another vehicle, a building, a utility pole, or the like. The vehicle-mounted terminal can identify each object contained in the bird's-eye view image, determine which of these objects are obstacles, and determine the position areas of the obstacles in the bird's-eye view image.
Step 102, acquiring millimeter wave data collected from the surrounding environment by the millimeter wave radar of the vehicle-mounted terminal.
Optionally, the vehicle is provided with a millimeter wave radar, and the vehicle-mounted terminal can acquire the millimeter wave data it collects. The millimeter wave data may be collected from the surrounding environment in the four directions of the front, rear, left and right of the vehicle. Because the detection signal transmitted by the millimeter wave radar has high penetrability, the vehicle can be equipped with a single millimeter wave radar that collects the surrounding environment in all four directions. Of course, the vehicle may also be equipped with a plurality of millimeter wave radars, which is not limited in this application.
Step 103, acquiring the position coordinates of the millimeter wave data in a target coordinate system, wherein the target coordinate system is the pixel coordinate system of the bird's-eye view image.
The pixel coordinate system may be a coordinate system established by taking the pixel point of one corner point of the bird's-eye view image as the origin.
Optionally, the vehicle-mounted terminal acquires the position coordinates of the millimeter wave data in the target coordinate system from the obtained millimeter wave data. When the millimeter wave data and the visual perception data are fused, the pixel coordinate system of the visual perception data is used as the coordinate system for fusion, and the millimeter wave data is converted from the radar coordinate system of the millimeter wave radar to the pixel coordinate system, thereby obtaining the position coordinates of the millimeter wave data in the target coordinate system. For example, the vehicle-mounted terminal may be preset with a first conversion formula for converting the radar coordinate system into the pixel coordinate system, and through this formula the vehicle-mounted terminal calculates the position in the pixel coordinate system corresponding to the millimeter wave data in the radar coordinate system. Optionally, the vehicle-mounted terminal may also convert the millimeter wave data from the radar coordinate system to the pixel coordinate system through interconversion between multiple coordinate systems.
Step 104, re-dividing the impassable area and the passable area of the bird's-eye view image according to the position relation between the position coordinates and the position area.
Optionally, after the vehicle-mounted terminal acquires the position coordinates of the millimeter wave data in the target coordinate system, it can determine the position relation between the position coordinates and the position area in the target coordinate system. For example, the position relation may include: the position area contains the millimeter wave data, the position area partially intersects the millimeter wave data, or the position area does not contain the millimeter wave data. The vehicle-mounted terminal re-divides the impassable area and the passable area in the bird's-eye view image according to the current position relation, so that the collected millimeter wave data corrects the division of areas in the visual image, which improves the accuracy of dividing the impassable area and the passable area.
In summary, the vehicle-mounted terminal acquires the bird's-eye view image of the surrounding environment of the vehicle, identifies the obstacle in the bird's-eye view image, and determines the position area of the obstacle in the bird's-eye view image; acquires the millimeter wave data collected from the surrounding environment by the millimeter wave radar of the vehicle-mounted terminal; acquires the position coordinates of the millimeter wave data in the target coordinate system, the target coordinate system being the pixel coordinate system of the bird's-eye view image; and re-divides the impassable area and the passable area of the bird's-eye view image according to the position relation between the position coordinates and the position area. By combining the position coordinates of the millimeter wave data in the target coordinate system to obtain the position relation between the position coordinates and the position area where the obstacle is located, a scheme of decision-level information fusion is realized, the impassable area and the passable area of the bird's-eye view image are re-divided, a correction effect is achieved, and the accuracy of perception of the surrounding environment by the vehicle-mounted terminal is improved.
In one possible implementation manner, in the process of identifying the obstacle in the bird's-eye view image and determining the position area of the obstacle in the bird's-eye view image, the vehicle-mounted terminal may acquire the object labels of the respective objects in the bird's-eye view image and determine the position area of the obstacle based on the object labels. Referring to fig. 2, a flowchart of an environment passing area detection method according to an exemplary embodiment of the present application is shown. The environment passing area detection method is applied to the vehicle-mounted terminal and, as shown in fig. 2, can comprise the following steps.
Step 201, acquiring a third acquisition frequency according to the first acquisition frequency of the millimeter wave radar and the second acquisition frequency of the camera.
Optionally, in this embodiment of the application, the millimeter wave radar and the cameras can be installed around the vehicle; the millimeter wave radar collects the surrounding environment in the four directions of the front, rear, left and right of the vehicle to obtain millimeter wave data, and the cameras also collect the surrounding environment in these four directions to obtain environment images. In general, the acquisition frequency of the millimeter wave radar differs from that of the cameras, and when the data of the two are fused, they need to be unified in time. For example, the acquisition frequency of the millimeter wave radar is 20 Hz (hertz) and the acquisition frequency of the camera is 30 Hz; the vehicle-mounted terminal can calculate a third acquisition frequency from 20 Hz and 30 Hz, and when the images collected by the camera and the millimeter wave data collected by the millimeter wave radar are extracted at the third acquisition frequency, millimeter wave data and images acquired at the same moments can be obtained. For example, the vehicle-mounted terminal may calculate the greatest common divisor of the two acquisition frequencies and take it as the third acquisition frequency; for 20 Hz and 30 Hz this gives 10 Hz, the highest frequency at which both sensors produce samples at the same instants.
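For example, the relationship between the two sensor rates and the extraction rate can be checked with the Python standard library (the 20 Hz and 30 Hz figures are the ones from the example above):

```python
from math import gcd

def common_sampling_frequency(radar_hz: int, camera_hz: int) -> int:
    """Highest frequency at which both sensors produce samples at the same
    instants: the greatest common divisor of the two acquisition frequencies."""
    return gcd(radar_hz, camera_hz)

print(common_sampling_frequency(20, 30))  # 10 (Hz): both sensors align every 0.1 s
```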
Optionally, if the acquisition frequency of the millimeter wave radar is the same as that of the camera, the two kinds of data may be acquired directly at that common frequency, and this step may be omitted, which is not repeated here.
Step 202, acquiring a bird's-eye view image of the surrounding environment of the vehicle according to the third acquisition frequency, identifying an obstacle in the bird's-eye view image, and determining a position area of the obstacle in the bird's-eye view image.
In other words, after the third acquisition frequency is obtained, the bird's-eye view image of the environment around the vehicle is acquired by extracting, at the third acquisition frequency, the images captured by the cameras. Optionally, in this embodiment of the application, a plurality of cameras may be arranged around the vehicle, and the images captured by these cameras may be stitched together into a bird's-eye view image of the surroundings of the vehicle. For example, the vehicle-mounted terminal shoots the surrounding environment with the cameras at the second acquisition frequency and obtains a stitched bird's-eye view image each time, so the frequency at which bird's-eye view images are obtained is initially the second acquisition frequency; after the third acquisition frequency is obtained, the vehicle-mounted terminal can extract the stitched bird's-eye view images at the third acquisition frequency, thereby obtaining bird's-eye view images that correspond in time to the millimeter wave data.
Optionally, after the bird's-eye view image is obtained, the vehicle-mounted terminal may further identify the obstacle in the bird's-eye view image and obtain the object label of the obstacle, and determine the position area of the obstacle in the bird's-eye view image according to the object label. This process can be executed by an image recognition model preset in the vehicle-mounted terminal. That is, the vehicle-mounted terminal may input the obtained bird's-eye view image into the image recognition model, which recognizes the bird's-eye view image and thereby determines the area of the obstacle in it.
Optionally, the image recognition model may be obtained by training on manually labeled bird's-eye view images. For example, a developer may manually label a certain number of bird's-eye view images, marking objects such as roads, pedestrians, vehicles and walls to obtain the object labels corresponding to the objects, where different object labels indicate different objects. The bird's-eye view image is a three-channel red-green-blue (RGB) image; during labeling, different obstacles can be represented by different pixel values, forming a single-channel grayscale image whose width and height are the same as those of the original bird's-eye view image. For example, the developer can label all pixels belonging to one object with one value, such as "1" representing a pedestrian, "2" representing a wall surface and "3" representing a road, to obtain a single-channel grayscale image with the same width and height as the original bird's-eye view image, and use the bird's-eye view image and the corresponding grayscale image as sample data for model training. In this case, all objects other than the road can be regarded as obstacles affecting the running of the vehicle.
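As a sketch of the labeling convention just described, the following derives a binary obstacle mask from such a single-channel label image; the pixel values follow the example above, and the function name is a hypothetical one introduced here:

```python
import numpy as np

PEDESTRIAN, WALL, ROAD = 1, 2, 3  # example label codes from the text

def obstacle_mask_from_labels(label_image: np.ndarray) -> np.ndarray:
    """Every object other than the road is regarded as an obstacle
    affecting the running of the vehicle."""
    return (label_image != ROAD).astype(np.uint8)

# A toy 3x3 label image: a road column, a pedestrian, and a wall column.
labels = np.array([[ROAD, ROAD,       WALL],
                   [ROAD, PEDESTRIAN, WALL],
                   [ROAD, ROAD,       WALL]], dtype=np.uint8)
print(obstacle_mask_from_labels(labels))
```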
Optionally, in the model building process, a developer may select a base neural network model and adapt it into a semantic segmentation model, so that when a bird's-eye view image is input into the machine learning model, it outputs a single-channel grayscale image with the same width and height as the bird's-eye view image, namely the predicted image. During training, the predicted image is compared with the manually labeled object labels, the loss function and the segmentation accuracy of the prediction are calculated, the loss value of the model is reduced through back propagation, and the machine learning model finally converges to give the trained image recognition model.
The vehicle-mounted terminal inputs the acquired bird's-eye view image into the trained image recognition model and finally obtains a predicted image, which reflects the position area of the obstacle in the bird's-eye view image. Please refer to fig. 3, which shows a schematic diagram of a predicted image according to an exemplary embodiment of the present application. As shown in fig. 3, the predicted image 300 includes a vehicle 301, a first obstacle 302, a second obstacle 303 and a passable area 304. The vehicle 301 may be the vehicle equipped with the vehicle-mounted terminal, the first obstacle 302 may be a tree, the second obstacle 303 may be a building, and the passable area 304 indicates the area where the vehicle can travel.
Step 203, acquiring, according to the third acquisition frequency, the millimeter wave data collected from the surrounding environment by the millimeter wave radar of the vehicle-mounted terminal, wherein the millimeter wave data comprises speed information of the environmental object corresponding to the millimeter wave data relative to the vehicle.
Optionally, the vehicle-mounted terminal acquires the millimeter wave data at the obtained third acquisition frequency. The millimeter wave data can be four-dimensional: in the millimeter wave coordinate system, a spatial rectangular coordinate system is established with the millimeter wave radar as the origin, and each point in space contains, in addition to its coordinates on the three axes relative to the origin, motion information relative to the origin; that is, the millimeter wave data includes the speed information of the environmental object corresponding to the millimeter wave data relative to the vehicle. The coordinates of any point in space can be represented as (a, b, c, V), where a represents the information of the first coordinate axis, b that of the second coordinate axis, c that of the third coordinate axis, and V the moving speed of the point relative to the origin.
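For illustration, one way to hold such a four-dimensional point in code is sketched below; the units (metres, metres per second) are assumptions, since the text does not fix them:

```python
from dataclasses import dataclass

@dataclass
class RadarPoint:
    """One millimeter wave return (a, b, c, V) as described above."""
    a: float  # coordinate on the first axis relative to the radar origin
    b: float  # coordinate on the second axis
    c: float  # coordinate on the third axis
    v: float  # moving speed of the point relative to the origin

point = RadarPoint(a=5.0, b=-1.2, c=0.4, v=3.0)  # e.g. the pedestrian example that follows
```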
For example, if there is a pedestrian in front of the vehicle walking toward it, the speed of the pedestrian is 1 meter per second and the speed of the vehicle is 2 meters per second, then in the millimeter wave data collected by the millimeter wave radar, the millimeter wave data representing the pedestrian includes, in addition to the position of the pedestrian relative to the millimeter wave radar, the speed of the pedestrian relative to the vehicle (3 meters per second).
Step 204, screening the millimeter wave data collected by the millimeter wave radar according to the speed information.
Optionally, the millimeter wave data can be screened to remove unrealistic millimeter wave data caused by detection errors, improving the reliability of the data and the accuracy of subsequent outputs. In one possible implementation manner, the vehicle-mounted terminal can acquire the historical millimeter wave data collected by the millimeter wave radar in at least one acquisition period adjacent to the current acquisition period; acquire the speed variance from the speed information of the current acquisition period and the speed information in the historical millimeter wave data; and screen the millimeter wave data collected in the current acquisition period according to the speed variance. That is, for a given object around the vehicle, its millimeter wave data should be present continuously in adjacent sampling periods with only a small speed difference. If the speed variance of the same object over several consecutive frames of data is greater than a variance threshold, the point is removed. The remaining millimeter wave data is regarded as usable data and is the millimeter wave data subjected to subsequent processing.
For example, in two consecutive acquisition periods, the millimeter wave data acquired by the vehicle-mounted terminal includes data of a certain stone pier. In the first acquisition period, the speed information of the stone pier is 5 meters per second, but in the second acquisition period it is 55 meters per second; the vehicle-mounted terminal computes a speed variance of 625, and with a variance threshold of 10 this indicates that the acquired data is erroneous, so the millimeter wave data can be removed. If the millimeter wave data includes data of a certain wall surface whose speed information is 5 meters per second in the first acquisition period and 6 meters per second in the second, the speed variance is 0.25, and with a variance threshold of 10 the millimeter wave data can be retained, thereby achieving the effect of filtering the millimeter wave data. The variance threshold may be preset by a developer.
In one possible implementation manner, the vehicle-mounted terminal may screen the millimeter wave data collected by the millimeter wave radar according to the speed information and a preset speed threshold. For example, a preset speed threshold may be set in the vehicle-mounted terminal in advance, and after the millimeter wave data is acquired, it is screened according to the speed information of each millimeter wave data point and the preset speed threshold. For example, if the preset speed threshold is 200 km/h and the speed information in a certain millimeter wave data point is greater than 200 km/h, that millimeter wave data is removed.
It should be noted that the above screening methods are exemplary; the two screening methods may be used in combination, or other screening methods may be used to screen the millimeter wave data (for example, counting the number of times a certain object appears in consecutive sampling periods and deleting its millimeter wave data if it appears fewer than a preset number of times), which is not limited in this embodiment of the present application.
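The two screening rules can be sketched as follows; the threshold values and the unit conversion are illustrative assumptions consistent with the examples above:

```python
import statistics

VARIANCE_THRESHOLD = 10.0     # example value from the text
SPEED_THRESHOLD = 200 / 3.6   # 200 km/h from the text, expressed in m/s

def keep_by_variance(speed_history: list[float]) -> bool:
    """Variance screening: drop an object whose speed variance over
    consecutive acquisition periods exceeds the threshold."""
    return statistics.pvariance(speed_history) <= VARIANCE_THRESHOLD

def keep_by_threshold(speed_mps: float) -> bool:
    """Threshold screening: drop physically implausible speeds."""
    return abs(speed_mps) <= SPEED_THRESHOLD

print(keep_by_variance([5.0, 55.0]))  # False, variance 625 (the stone pier example)
print(keep_by_variance([5.0, 6.0]))   # True, variance 0.25 (the wall example)
print(keep_by_threshold(3.0))         # True
```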
Step 205, acquiring the position coordinates of the millimeter wave data in the target coordinate system, wherein the target coordinate system is the pixel coordinate system of the bird's-eye view image.
After the millimeter wave data is obtained, the vehicle-mounted terminal may perform coordinate conversion on it to convert it into the target coordinate system. Optionally, the vehicle-mounted terminal may first convert the millimeter wave data into a world coordinate system centered on the camera, then convert the data in the world coordinate system into the camera coordinate system, convert the data in the camera coordinate system into the image coordinate system, and finally convert the data in the image coordinate system into the pixel coordinate system.
The camera here may be the above-described camera for capturing images of the surrounding environment; if the vehicle-mounted terminal is equipped with a plurality of cameras, it may be one of them designated in advance by the developer. The camera coordinate system is a coordinate system established with the camera as the origin. The image coordinate system is the coordinate system of the obtained bird's-eye view image, and the pixel coordinate system is a coordinate system established with the pixel point of one corner point of the bird's-eye view image as the origin. After this series of conversions, the vehicle-mounted terminal obtains the millimeter wave data in the pixel coordinate system, each millimeter wave data point having a pixel coordinate, and according to these pixel coordinates the millimeter wave data is fused with the obtained predicted image, for example by adding the millimeter wave data to the image shown in fig. 3.
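This chain of conversions can be sketched with standard homogeneous transforms and a pinhole intrinsic matrix; the concrete matrices are calibration results, so everything passed in below is a placeholder assumption rather than values given by the application:

```python
import numpy as np

def radar_to_pixel(points_radar: np.ndarray,
                   T_world_from_radar: np.ndarray,   # (4, 4) extrinsic, radar -> world
                   T_cam_from_world: np.ndarray,     # (4, 4) extrinsic, world -> camera
                   K: np.ndarray) -> np.ndarray:     # (3, 3) camera intrinsic matrix
    """Convert (N, 3) radar points through world and camera coordinates to
    (N, 2) pixel coordinates: radar -> world -> camera -> image -> pixel."""
    homog = np.hstack([points_radar, np.ones((len(points_radar), 1))])  # (N, 4)
    in_world = T_world_from_radar @ homog.T          # radar -> world
    in_cam = (T_cam_from_world @ in_world)[:3, :]    # world -> camera
    uvw = K @ in_cam                                 # camera -> image plane
    return (uvw[:2, :] / uvw[2, :]).T                # image -> pixel (u, v)
```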
Step 206, re-dividing the impassable area and the passable area of the bird's-eye view image according to the position relation between the position coordinates and the position area.
Optionally, after the position coordinates of the millimeter wave data in the target coordinate system are obtained, data fusion is performed: the millimeter wave data is added to the predicted image according to the position coordinates, the position relation between the millimeter wave data and the predicted image is determined from the position coordinates of the millimeter wave data and the coordinates indicating the position area in the predicted image, and the impassable area and the passable area of the bird's-eye view image are re-divided.
In one possible implementation manner, when the position coordinates include position coordinates outside the position area, the position coordinates are clustered according to the distances between them to obtain a clustering result; a target category is acquired from the clustering result, the target category comprising the category of the position coordinates outside the position area; and the impassable area and the passable area of the bird's-eye view image are re-divided according to the position coordinates in the target category.
Optionally, after obtaining the predicted image shown in fig. 3, the vehicle-mounted terminal may compare the coordinates corresponding to the edge of the obstacle with the position coordinates of the millimeter wave data in the target coordinate system to obtain the position relation between the two. When the position coordinates include position coordinates outside the position area, the vehicle-mounted terminal can perform clustering according to the distances between the position coordinates to obtain a clustering result. For example, please refer to fig. 4, which shows a schematic diagram of a predicted image including millimeter wave data according to an exemplary embodiment of the present application. As shown in fig. 4, the predicted image 400 includes a vehicle 401, a first obstacle 402, a second obstacle 403, a third obstacle 404, first millimeter wave data 405, a category one 406, a category two 407 and a category three 408. Fig. 4 is a schematic diagram after the vehicle-mounted terminal adds the millimeter wave data to the predicted image. In fig. 4, the vehicle-mounted terminal can cluster the position coordinates of the first millimeter wave data 405 outside the position area of the second obstacle 403 according to the distances between them, obtaining category one 406, category two 407 and category three 408. The target category, namely category two 407 containing the first millimeter wave data 405, is acquired from these categories, and the impassable area and the passable area of the bird's-eye view image are re-divided according to the position coordinates in category two 407.
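The application does not name a specific clustering algorithm; the following is a minimal single-link sketch consistent with "clustering according to the distances between the position coordinates", where the pixel gap that separates clusters is an assumed parameter:

```python
import numpy as np

def cluster_by_distance(coords: np.ndarray, max_gap: float = 15.0) -> list[np.ndarray]:
    """Group (N, 2) position coordinates so that points closer than max_gap
    pixels end up in the same category (single-link region growing)."""
    remaining = list(range(len(coords)))
    clusters = []
    while remaining:
        members = [remaining.pop(0)]                 # seed a new category
        i = 0
        while i < len(members):                      # grow it transitively
            close = [j for j in remaining
                     if np.linalg.norm(coords[members[i]] - coords[j]) < max_gap]
            for j in close:
                remaining.remove(j)
            members.extend(close)
            i += 1
        clusters.append(coords[members])
    return clusters
```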
Optionally, the vehicle-mounted terminal may obtain the first line segment according to the position coordinates in the first category, where the first category is any one of the target categories; obtain the target extension line according to the first line segment; and re-divide the impassable area and the passable area of the bird's-eye view image according to the first line segment and the target extension line. If fig. 4 contained several other millimeter wave data similar to the first millimeter wave data 405 which, after clustering, belonged to different categories, the vehicle-mounted terminal would acquire a plurality of target categories and would perform the step of acquiring a first line segment for each of them. Taking the predicted image shown in fig. 4 as an example, the vehicle-mounted terminal may fit a first line segment by linear regression to the position coordinates in the first category, and obtain the target extension lines through the two end points of the first line segment.
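Fitting the first line segment to one category's position coordinates can be sketched with ordinary least squares (np.polyfit); note that a nearly vertical point set would need the axes swapped, a detail the application leaves open:

```python
import numpy as np

def fit_segment(coords: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Fit a line y = kx + b to (N, 2) coordinates by linear regression and
    return the two end points of the resulting first line segment."""
    x, y = coords[:, 0], coords[:, 1]
    slope, intercept = np.polyfit(x, y, deg=1)
    x0, x1 = x.min(), x.max()
    return (np.array([x0, slope * x0 + intercept]),
            np.array([x1, slope * x1 + intercept]))
```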
In a possible implementation manner, the above-mentioned acquisition target extension line may be such that the vehicle-mounted terminal may determine a shooting area to which a first end point belongs in the bird's eye view image, the first end point being any one end point in the first line segment; determining a camera corresponding to the shooting area and a setting position of the camera relative to the vehicle body; and extending the first end point along the direction matched with the setting position to generate a target extension line. Alternatively, the direction matching the set position may be a direction extending from the set position perpendicularly to the vehicle body toward the vehicle outside.
For example, the in-vehicle terminal divides the shooting area of the bird's-eye view image according to the components shot by each camera. Referring to fig. 5, a schematic diagram of a shot area in a predicted image according to an exemplary embodiment of the present application is shown. As shown in fig. 5, the predicted image 500 includes an image pickup area 501 on the front side of the vehicle, an image pickup area 502 on the right side of the vehicle, an image pickup area 503 on the rear side of the vehicle, an image pickup area 504 on the left side of the vehicle, a first line segment 505, a first end 505a, a second end 505b, a first object extension line 506, and a second object extension line 507. Fig. 5 is a view showing a bird's-eye view image in which the number of the cameras is 4 and which is arranged in four directions of the front, rear, left, and right of the vehicle, respectively, and the on-vehicle terminal stitches images captured by the four cameras, and the predicted image has the same width and height as the bird's-eye view image, and the captured region in the predicted image corresponds to the captured region in the bird's-eye view image.
In fig. 5, after acquiring the pixel coordinates of the first end point 505a of the first line segment 505, the vehicle-mounted terminal can determine that the shooting area to which the first end point 505a belongs in the bird's-eye view image is the front-side shooting area 501, determine that the corresponding camera is the one arranged at the front of the vehicle and that its setting position relative to the vehicle body is the front side, and extend the first end point 505a along the direction matched with the front side to generate the first target extension line 506. Optionally, the direction matched with the front side may be the direction pointing vertically forward from the vehicle. Correspondingly, for the second end point 505b of the first line segment 505, the vehicle-mounted terminal may determine that the shooting area to which the second end point 505b belongs in the bird's-eye view image is the right-side shooting area 502, and extend the second end point 505b along the direction matched with the right side to generate the second target extension line 507. The first line segment 505, the first target extension line 506, and the second target extension line 507 are then taken as the boundary of the corresponding impassable area.
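The extension step can be sketched as follows under the fig. 5 layout: a lookup from shooting area to an outward direction perpendicular to the matching body side. The `region_of` callback and the clamping to the image border are assumptions for illustration.

```python
# unit directions in pixel coordinates (y grows downward in the image)
DIRECTIONS = {"front": (0, -1), "rear": (0, 1), "left": (-1, 0), "right": (1, 0)}

def extend_endpoint(end_point, region_of, image_w, image_h):
    """region_of: callable (x, y) -> 'front' | 'rear' | 'left' | 'right',
    i.e. the shooting area the end point belongs to in the stitched image."""
    x, y = end_point
    dx, dy = DIRECTIONS[region_of(x, y)]
    steps = max(image_w, image_h)               # long enough to reach the border
    x2 = min(max(x + dx * steps, 0), image_w - 1)
    y2 = min(max(y + dy * steps, 0), image_h - 1)
    return (x, y), (x2, y2)                     # the target extension line
```

Applied to fig. 5, the first end point 505a maps to "front" and yields the first target extension line 506, while the second end point 505b maps to "right" and yields the second target extension line 507.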
In one possible implementation, when the position coordinates coincide with the edge coordinates of the position area, the vehicle-mounted terminal determines the area boundary formed by the edge coordinates of the position area and divides the impassable area and the passable area of the bird's-eye view image according to that area boundary. In other words, when the millimeter wave data overlap the edge coordinates of the position area, the position at which the millimeter wave radar detects the obstacle is the same as the position of the obstacle in the bird's-eye view image, so the division can directly follow the edge coordinates of the original impassable area. For example, in fig. 4, the position coordinates belonging to category one 406 overlap the edge coordinates of the first obstacle 402, so the vehicle-mounted terminal may divide the bird's-eye view image according to the edge coordinates of the first obstacle 402, and no correction is necessary.
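A simple coincidence test along these lines could look as follows; the pixel tolerance `tol_px` is an assumption, since the patent does not state how exact the overlap must be.

```python
import numpy as np

def coincides_with_edge(points, edge_coords, tol_px=2.0):
    """True if every millimeter-wave point lies on (or within tol_px of)
    the obstacle's edge coordinates, meaning radar and vision agree."""
    edge = np.asarray(edge_coords, dtype=float)     # (M, 2) edge pixels
    for p in np.asarray(points, dtype=float):
        if np.min(np.linalg.norm(edge - p, axis=1)) > tol_px:
            return False
    return True
```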
In one possible implementation manner, the position area may include a first area that contains none of the position coordinates. In that case, the vehicle-mounted terminal can acquire the object label corresponding to the first area after the bird's-eye view image is identified; when the object label is a preset label, it calibrates the first area according to the boundary of the first area and divides the first area into the impassable area; when the object label is not a preset label, it divides the first area into the passable area.
For example, fig. 4 further includes a fourth obstacle 409 whose area contains none of the position coordinates of the acquired millimeter wave data in the target coordinate system; the area of the fourth obstacle 409 can therefore be regarded as a first area within the impassable area. The vehicle-mounted terminal may acquire the object label that the image recognition model assigned to this first area when recognizing the bird's-eye view image and judge whether it is a preset label. If the object label is the preset label, the terminal calibrates the first area according to its boundary and divides it into the impassable area; if it is not the preset label, the terminal divides the first area into the passable area, which can equally be regarded as deleting it from the impassable area. Optionally, the preset label may be a label corresponding to an obstacle that can move at any time, such as a pedestrian, a bicycle, or an electric vehicle.
That is, if an obstacle is present in the predicted image but no corresponding detection appears in the millimeter wave data, one of the two perception results may be wrong. The vehicle-mounted terminal therefore distinguishes the cases by the object label obtained from recognizing the bird's-eye view image: if the object label is a pedestrian, a bicycle, or a similar movable object, it is judged that the millimeter wave data has a missed detection, the millimeter wave result is set aside, and the region is divided according to the obstacle boundary in the predicted image; if the object label is not such a label, it is judged that a false detection occurred when the bird's-eye view image was recognized, the obstacle region is removed, and the first area in the predicted image is changed into a passable area.
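The label-based correction can be condensed into a sketch like the one below; the label set and the list-based region bookkeeping are illustrative assumptions.

```python
# movable objects a radar sweep can plausibly miss (assumed preset labels)
PRESET_LABELS = {"pedestrian", "bicycle", "electric_vehicle"}

def correct_first_area(area_boundary, object_label, impassable, passable):
    """area_boundary: edge coordinates of a vision-only obstacle area
    that contains no millimeter-wave position coordinates."""
    if object_label in PRESET_LABELS:
        impassable.append(area_boundary)   # trust vision: radar missed it
    else:
        passable.append(area_boundary)     # treat vision as a false detection
```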
It should be noted that the positional relationships above may appear in the same predicted image or separately. The present application performs different types of correction according to the positional relationship between the millimeter wave data and the impassable area in the predicted image, thereby achieving the effect of re-dividing the passable area and the impassable area.
In summary, the vehicle-mounted terminal acquires the bird's-eye view image of the surrounding environment of the vehicle, identifies the obstacle in the bird's-eye view image, and determines the position area of the obstacle in the bird's-eye view image; acquires the millimeter wave data collected from the surrounding environment by the millimeter wave radar of the vehicle-mounted terminal; acquires the position coordinates of the millimeter wave data in the target coordinate system, the target coordinate system being the pixel coordinate system of the bird's-eye view image; and re-divides the impassable area and the passable area of the bird's-eye view image according to the position relationship between the position coordinates and the position area. By combining the position coordinates of the millimeter wave data in the target coordinate system with the position area where the obstacle is located, the application realizes decision-level information fusion, re-divides the impassable area and the passable area of the bird's-eye view image with a corrective effect, and improves the accuracy of the vehicle-mounted terminal's perception of the surrounding environment.
In addition, when the position coordinates in the target category are re-divided, the target extension line is obtained from the first line segment formed by those position coordinates, which determines the corresponding obstacle boundary detected in the millimeter wave data, expands the application scenarios of millimeter wave data, and improves its practicability.
The following are apparatus embodiments of the present application, which may be used to perform the method embodiments of the present application. For details not disclosed in the apparatus embodiments, reference is made to the method embodiments of the present application.
Referring to fig. 6, a block diagram of an environment passage area detection device 600 provided in an exemplary embodiment of the present application is shown. The environment passage area detection device may be applied to a vehicle-mounted terminal and includes: an area determination module 601, a data obtaining module 602, a coordinate obtaining module 603, and an image dividing module 604.
An area determination module 601, configured to acquire a bird's eye view image of a surrounding environment of a vehicle, identify an obstacle in the bird's eye view image, and determine a position area of the obstacle in the bird's eye view image;
a data obtaining module 602, configured to obtain millimeter wave data collected by a millimeter wave radar of the vehicle-mounted terminal for a surrounding environment;
a coordinate obtaining module 603, configured to obtain position coordinates of the millimeter wave data in a target coordinate system, where the target coordinate system is a pixel coordinate system of the bird's eye view image;
an image dividing module 604, configured to re-divide the impassable area and the passable area of the bird's-eye view image according to the position relationship between the position coordinates and the position area.
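As an illustration only, the four modules above compose into a pipeline along these lines; the callable interfaces are assumptions, not the literal module API.

```python
class EnvironmentPassableAreaDetector:
    """Sketch of how the four modules could be chained."""
    def __init__(self, determine_areas, fetch_mmw, project, divide):
        self.determine_areas = determine_areas   # area determination module 601
        self.fetch_mmw = fetch_mmw               # data obtaining module 602
        self.project = project                   # coordinate obtaining module 603
        self.divide = divide                     # image dividing module 604

    def detect(self, bev_image):
        areas = self.determine_areas(bev_image)       # obstacle position areas
        coords = self.project(self.fetch_mmw())       # radar -> pixel coordinates
        return self.divide(bev_image, areas, coords)  # repartitioned BEV image
```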
In summary, the vehicle-mounted terminal acquires the bird's-eye view image of the surrounding environment of the vehicle, identifies the obstacle in the bird's-eye view image, and determines the position area of the obstacle in the bird's-eye view image; acquires the millimeter wave data collected from the surrounding environment by the millimeter wave radar of the vehicle-mounted terminal; acquires the position coordinates of the millimeter wave data in the target coordinate system, the target coordinate system being the pixel coordinate system of the bird's-eye view image; and re-divides the impassable area and the passable area of the bird's-eye view image according to the position relationship between the position coordinates and the position area. By combining the position coordinates of the millimeter wave data in the target coordinate system with the position area where the obstacle is located, the application realizes decision-level information fusion, re-divides the impassable area and the passable area of the bird's-eye view image with a corrective effect, and improves the accuracy of the vehicle-mounted terminal's perception of the surrounding environment.
Optionally, the image dividing module 604 includes: a first obtaining unit, a second obtaining unit and a first dividing unit;
the first obtaining unit is configured to, when the position coordinates include position coordinates located outside the position area, cluster the position coordinates according to a distance between the position coordinates, and obtain a clustering result;
a second obtaining unit, configured to obtain a target category from the clustering result, where the target category includes a category of a position coordinate outside the position area;
and the first dividing unit is used for re-dividing the impassable area and the passable area of the aerial view image according to the position coordinates in the target category.
Optionally, the first dividing unit includes: a first obtaining subunit, a second obtaining subunit and a dividing subunit;
a first obtaining subunit, configured to obtain a first line segment according to a position coordinate in a first category, where the first category is any one of the target categories;
the second obtaining subunit is used for obtaining a target extension line according to the first line segment;
and the dividing subunit is used for re-dividing the impassable area and the passable area of the aerial view image according to the first line segment and the target extension line.
Optionally, the second obtaining subunit is configured to,
determining a shooting area of a first end point in the aerial view image, wherein the first end point is any one end point in the first line segment;
determining a camera corresponding to the shooting area and a setting position of the camera relative to a vehicle body;
and extending the first end point along a direction matched with the setting position to generate the target extension line.
Optionally, the image dividing module 604 includes: a second dividing unit;
the second dividing unit is configured to determine an area boundary formed by edge coordinates of the position area when the position coordinates coincide with the edge coordinates of the position area;
dividing the impassable area and the passable area of the aerial view image according to the area boundary.
Optionally, the area determination module 601 is further configured to,
acquiring an object label of an obstacle by identifying the obstacle in the aerial view image;
and determining the position area of the obstacle in the aerial view image according to the object label of the obstacle.
Optionally, the position area includes a first area, and the first area does not include any coordinate in the position coordinates; the region determination module 601 is further configured to,
acquiring an object label corresponding to the first area after the aerial view image is identified;
when the object label is a preset label, calibrating the first area according to the boundary of the first area, and dividing the first area into the impassable area;
when the object tag is not the preset tag, dividing the first area into the passable area.
Optionally, the apparatus further comprises:
a frequency obtaining module, configured to obtain a third acquisition frequency according to a first acquisition frequency of the millimeter wave radar and a second acquisition frequency of the camera before the data obtaining module 602 obtains millimeter wave data acquired by the millimeter wave radar of the vehicle-mounted terminal for a surrounding environment;
the data obtaining module 602 is configured to obtain millimeter wave data collected by the millimeter wave radar of the vehicle-mounted terminal for the surrounding environment according to the third collection frequency;
the area determining module 601 is configured to obtain an aerial view image of a surrounding environment of the vehicle according to the third acquisition frequency.
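The patent does not fix how the third acquisition frequency is derived from the first two; one plausible reading, offered purely as an assumption, is to synchronize both sensors at the rate of the slower one.

```python
def third_frequency(radar_hz: float, camera_hz: float) -> float:
    """Assumed derivation: sample both sources at the slower sensor's rate,
    so every fused frame has both a bird's-eye view image and radar data."""
    return min(radar_hz, camera_hz)
```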
Optionally, the millimeter wave data further includes speed information of an environmental object corresponding to the millimeter wave data relative to the vehicle; the device further comprises:
and a data screening module, configured to screen the millimeter wave data acquired by the millimeter wave radar according to the speed information before the coordinate obtaining module 603 obtains the position coordinate of the millimeter wave data in the target coordinate system.
Optionally, a data screening module, configured to,
acquiring historical millimeter wave data acquired by the millimeter wave radar in at least one acquisition period adjacent to the current acquisition period;
acquiring a speed variance according to the speed information of the current acquisition period and the speed information in the historical millimeter wave data;
and screening the millimeter wave data acquired in the current acquisition period according to the speed variance.
Optionally, the data screening module is configured to screen millimeter wave data acquired by the millimeter wave radar according to the speed information and a preset speed threshold.
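Both screening variants can be sketched together; the variance bound, the speed threshold, and the keep-the-stable-detections policy are assumptions for illustration.

```python
import numpy as np

def screen_by_variance(speed_history, max_var=0.5):
    """speed_history: (T, N) speeds of N detections over T adjacent
    acquisition periods (last row = current period). Returns a keep-mask
    for the current period: detections with erratic speeds are dropped."""
    return np.var(np.asarray(speed_history, dtype=float), axis=0) <= max_var

def screen_by_threshold(speeds, max_speed_mps=0.2):
    """Static variant: keep detections whose speed relative to the vehicle
    stays below a preset threshold (e.g. near-stationary obstacles)."""
    return np.abs(np.asarray(speeds, dtype=float)) <= max_speed_mps
```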
Referring to fig. 7, a schematic structural diagram of a vehicle-mounted terminal disclosed in an exemplary embodiment of the present application is shown. As shown in fig. 7, the vehicle-mounted terminal may include: a Radio Frequency (RF) circuit 710, a memory 720, an input unit 730, a display unit 740, a sensor 750, an audio circuit 760, a WiFi module 770, a processor 780, and a power supply 790. Those skilled in the art will appreciate that the structure shown in fig. 7 does not constitute a limitation of the vehicle-mounted terminal, which may include more or fewer components than shown, combine some components, or arrange the components differently.
The respective constituent elements of the in-vehicle terminal will be described below with reference to fig. 7:
the RF circuit 710 may be used for receiving and transmitting signals during information transmission and reception or during a call, and in particular, receives downlink information of a base station and then processes the received downlink information to the processor 780; in addition, the data for designing uplink is transmitted to the base station. In general, the RF circuit 710 includes, but is not limited to, an antenna, at least one Amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 710 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The memory 720 may be used to store software programs and modules, and the processor 780 performs the various functional applications and data processing of the vehicle-mounted terminal by running the software programs and modules stored in the memory 720. The memory 720 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the vehicle-mounted terminal (such as audio data or a phonebook), and the like. Further, the memory 720 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The input unit 730 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the in-vehicle terminal. Specifically, the input unit 730 may include a touch panel 731 and other input devices 732. The touch panel 731, also referred to as a touch screen, can collect touch operations of a user (e.g. operations of the user on or near the touch panel 731 by using any suitable object or accessory such as a finger, a stylus, etc.) and drive the corresponding connection device according to a preset program. Alternatively, the touch panel 731 may include two portions of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts it to touch point coordinates, and sends the touch point coordinates to the processor 780, and can receive and execute commands from the processor 780. In addition, the touch panel 731 may be implemented by various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The input unit 730 may include other input devices 732 in addition to the touch panel 731. In particular, other input devices 732 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 740 may be used to display information input by the user or information provided to the user, and various menus of the in-vehicle terminal. The Display unit 740 may include a Display panel 741, and optionally, the Display panel 741 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 731 can cover the display panel 741, and when the touch panel 731 detects a touch operation on or near the touch panel 731, the touch operation is transmitted to the processor 780 to determine the type of the touch event, and then the processor 780 provides a corresponding visual output on the display panel 741 according to the type of the touch event. Although the touch panel 731 and the display panel 741 are implemented as two separate components in fig. 7 to implement the input and output functions of the in-vehicle terminal, in some embodiments, the touch panel 731 and the display panel 741 may be integrated to implement the input and output functions of the in-vehicle terminal.
The vehicle-mounted terminal may further include at least one sensor 750, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor, which may adjust the brightness of the display panel 741 according to the brightness of ambient light, and a proximity sensor, which may turn off the display panel 741 and/or the backlight when the vehicle-mounted terminal is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used for applications that recognize the attitude of the vehicle-mounted terminal (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and for vibration-recognition functions (such as a pedometer or tap detection). Other sensors that can be configured on the vehicle-mounted terminal, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not described herein.
The audio circuit 760, speaker 761, and microphone 762 may provide an audio interface between the user and the vehicle-mounted terminal. The audio circuit 760 may transmit the electrical signal converted from received audio data to the speaker 761, which converts it into a sound signal for output; conversely, the microphone 762 converts a collected sound signal into an electrical signal, which the audio circuit 760 receives and converts into audio data. After being processed by the processor 780, the audio data may be transmitted via the RF circuit 710 to, for example, another vehicle-mounted terminal, or output to the memory 720 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 770, the vehicle-mounted terminal can help the user receive and send e-mails, browse webpages, access streaming media, and the like, providing wireless broadband Internet access. Although fig. 7 shows the WiFi module 770, it is understood that it is not an essential component of the vehicle-mounted terminal and may be omitted as needed within a scope that does not change the essence of the invention.
The processor 780 is a control center of the in-vehicle terminal, connects various parts of the entire in-vehicle terminal using various interfaces and lines, and performs various functions of the in-vehicle terminal and processes data by operating or executing software programs and/or modules stored in the memory 720 and calling data stored in the memory 720, thereby performing overall monitoring of the in-vehicle terminal. Optionally, processor 780 may include one or more processing units; preferably, the processor 780 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 780.
The vehicle terminal also includes a power source 790 (e.g., a battery) for providing power to the various components, and preferably, the power source may be logically coupled to the processor 780 via a power management system, such that the power management system may perform functions of managing charging, discharging, and power consumption.
Although not shown, the in-vehicle terminal may further include a camera, a bluetooth module, and the like, which are not described herein.
The embodiment of the application also discloses a computer readable storage medium which stores a computer program, wherein the computer program realizes the method in the embodiment of the method when being executed by a processor.
The embodiment of the application also discloses a computer program product, which comprises a non-transitory computer readable storage medium storing a computer program, and the computer program is operable to make a computer execute the method in the method embodiment.
The embodiment of the application also discloses an application publishing platform, wherein the application publishing platform is used for publishing a computer program product, and when the computer program product runs on a computer, the computer is enabled to execute the method in the method embodiment.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art should also appreciate that the embodiments described in this specification are all alternative embodiments and that the acts and modules involved are not necessarily required for this application.
In the various embodiments of the present application, it should be understood that the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated units, if implemented as software functional units and sold or used as a stand-alone product, may be stored in a computer-accessible memory. Based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like, and specifically may be a processor in the computer device) to execute all or part of the steps of the above-described methods of the embodiments of the present application.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be completed by instructing relevant hardware through a program, and the program may be stored in a computer-readable storage medium, where the storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disc storage, magnetic disk storage, tape storage, or any other computer-readable medium that can be used to carry or store data.
The environment passing area detection method and device, vehicle-mounted terminal, and storage medium disclosed in the embodiments of the present application have been described in detail above. Specific examples are applied herein to explain the principles and implementation of the present application, and the description of the embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, for those skilled in the art, there may be variations in the specific implementation and application scope according to the idea of the present application. In summary, the content of this specification should not be construed as a limitation of the present application.

Claims (14)

1. An environment passing area detection method, applied to a vehicle-mounted terminal, the method comprising the following steps:
acquiring a bird's-eye view image of the surrounding environment of a vehicle, identifying an obstacle in the bird's-eye view image, and determining a position area of the obstacle in the bird's-eye view image;
acquiring millimeter wave data collected from the surrounding environment by a millimeter wave radar of the vehicle-mounted terminal;
acquiring position coordinates of the millimeter wave data in a target coordinate system, wherein the target coordinate system is a pixel coordinate system of the aerial view image;
and re-dividing the impassable area and the passable area of the aerial view image according to the position relation between the position coordinates and the position area.
2. The environment passing area detection method according to claim 1, wherein the re-dividing of the impassable area and the passable area of the bird's-eye view image according to the position relationship between the position coordinates and the position area comprises:
when the position coordinates comprise position coordinates outside the position area, clustering the position coordinates according to the distance between the position coordinates to obtain a clustering result;
acquiring a target category from the clustering result, wherein the target category comprises a category of position coordinates outside the position area;
and re-dividing the impassable area and the passable area of the aerial view image according to the position coordinates in the target category.
3. The environment passing area detection method according to claim 2, wherein the re-dividing of the impassable area and the passable area of the bird's-eye view image according to the position coordinates in the target category comprises:
acquiring a first line segment according to a position coordinate in a first category, wherein the first category is any one of the target categories;
acquiring a target extension line according to the first line segment;
and re-dividing the impassable area and the passable area of the aerial view image according to the first line segment and the target extension line.
4. The environment passing area detection method according to claim 3, wherein the obtaining of the target extension line according to the first line segment comprises:
determining a shooting area of a first end point in the aerial view image, wherein the first end point is any one end point in the first line segment;
determining a camera corresponding to the shooting area and a setting position of the camera relative to a vehicle body;
and extending the first end point along a direction matched with the setting position to generate the target extension line.
5. The environment passing area detection method according to claim 1, wherein the re-dividing of the impassable area and the passable area of the bird's-eye view image according to the position relationship between the position coordinates and the position area comprises:
when the position coordinates coincide with the edge coordinates of the position area, determining an area boundary formed by the edge coordinates of the position area;
dividing the impassable area and the passable area of the aerial view image according to the area boundary.
6. The environment passing area detection method according to claim 1, wherein the identifying of an obstacle in the bird's-eye view image and the determining of the position area of the obstacle in the bird's-eye view image comprise:
acquiring an object label of an obstacle by identifying the obstacle in the aerial view image;
and determining the position area of the obstacle in the aerial view image according to the object label of the obstacle.
7. The environment passing area detection method according to claim 6, wherein the position area includes a first area, and the first area does not include any one of the position coordinates; the re-dividing of the impassable area and the passable area of the bird's-eye view image according to the position relationship between the position coordinates and the position area comprises:
acquiring an object label corresponding to the first area after the aerial view image is identified;
when the object label is a preset label, calibrating the first area according to the boundary of the first area, and dividing the first area into the impassable area;
when the object tag is not the preset tag, dividing the first area into the passable area.
8. The environment passing area detection method according to any one of claims 1 to 7, wherein before the acquiring of the millimeter wave data collected from the surrounding environment by the millimeter wave radar of the vehicle-mounted terminal, the method further comprises:
acquiring a third acquisition frequency according to the first acquisition frequency of the millimeter wave radar and the second acquisition frequency of the camera;
the acquiring of the millimeter wave data collected from the surrounding environment by the millimeter wave radar of the vehicle-mounted terminal comprises:
acquiring, according to the third acquisition frequency, the millimeter wave data collected from the surrounding environment by the millimeter wave radar of the vehicle-mounted terminal;
the acquiring of the bird's eye view image of the surroundings of the vehicle comprises:
and acquiring the aerial view image of the surrounding environment of the vehicle according to the third acquisition frequency.
9. The environment passing area detection method according to any one of claims 1 to 7, wherein the millimeter wave data further includes speed information of an environmental object corresponding to the millimeter wave data relative to the vehicle, and before the acquiring of the position coordinates of the millimeter wave data in the target coordinate system, the method further comprises:
and screening the millimeter wave data acquired by the millimeter wave radar according to the speed information.
10. The environment passing area detection method according to claim 9, wherein the screening of the millimeter wave data according to the speed information comprises:
acquiring historical millimeter wave data acquired by the millimeter wave radar in at least one acquisition period adjacent to the current acquisition period;
acquiring a speed variance according to the speed information of the current acquisition period and the speed information in the historical millimeter wave data;
and screening the millimeter wave data acquired in the current acquisition period according to the speed variance.
11. The environment passing area detection method according to claim 9, wherein the screening of the millimeter wave data according to the speed information comprises:
and screening the millimeter wave data acquired by the millimeter wave radar according to the speed information and a preset speed threshold.
12. An environment passing area detection device, applied to a vehicle-mounted terminal, the device comprising:
the device comprises an area determination module, a position determination module and a display module, wherein the area determination module is used for acquiring a bird's-eye view image of the surrounding environment of a vehicle, identifying an obstacle in the bird's-eye view image and determining the position area of the obstacle in the bird's-eye view image;
the data acquisition module is used for acquiring millimeter wave data collected from the surrounding environment by a millimeter wave radar of the vehicle-mounted terminal;
the coordinate acquisition module is used for acquiring the position coordinates of the millimeter wave data in a target coordinate system, wherein the target coordinate system is a pixel coordinate system of the aerial view image;
and the image dividing module is used for re-dividing the impassable area and the passable area of the aerial view image according to the position relation between the position coordinates and the position area.
13. A vehicle-mounted terminal, comprising a memory and a processor, wherein the memory stores a computer program, and the computer program, when executed by the processor, causes the processor to implement the environment passing area detection method according to any one of claims 1 to 11.
14. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the environment passing area detection method according to any one of claims 1 to 11.
CN202110303848.1A 2021-03-22 2021-03-22 Environment passing area detection method and device, vehicle-mounted terminal and storage medium Withdrawn CN113076830A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110303848.1A CN113076830A (en) 2021-03-22 2021-03-22 Environment passing area detection method and device, vehicle-mounted terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110303848.1A CN113076830A (en) 2021-03-22 2021-03-22 Environment passing area detection method and device, vehicle-mounted terminal and storage medium

Publications (1)

Publication Number Publication Date
CN113076830A true CN113076830A (en) 2021-07-06

Family

ID=76613140

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110303848.1A Withdrawn CN113076830A (en) 2021-03-22 2021-03-22 Environment passing area detection method and device, vehicle-mounted terminal and storage medium

Country Status (1)

Country Link
CN (1) CN113076830A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114445593A (en) * 2022-01-30 2022-05-06 重庆长安汽车股份有限公司 Aerial view semantic segmentation label generation method based on multi-frame semantic point cloud splicing
CN114445593B (en) * 2022-01-30 2024-05-10 重庆长安汽车股份有限公司 Bird's eye view semantic segmentation label generation method based on multi-frame semantic point cloud splicing


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210706