CN116452522A - Parking space detection method, device, equipment, medium, program product and vehicle - Google Patents

Info

Publication number
CN116452522A
CN116452522A (application CN202310318629.XA)
Authority
CN
China
Prior art keywords
vehicle
image frame
bev
parking space
space detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310318629.XA
Other languages
Chinese (zh)
Inventor
房慧娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaomi Automobile Technology Co Ltd
Original Assignee
Xiaomi Automobile Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaomi Automobile Technology Co Ltd filed Critical Xiaomi Automobile Technology Co Ltd
Priority to CN202310318629.XA
Publication of CN116452522A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30248: Vehicle exterior or interior
    • G06T 2207/30252: Vehicle exterior; Vicinity of vehicle
    • G06T 2207/30264: Parking
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present disclosure relates to a parking space detection method, device, equipment, medium, program product and vehicle. The parking space detection method comprises the following steps: acquiring environment information of a vehicle at the current moment and a first bird's eye view (BEV) image frame of the vehicle at the previous moment, wherein the environment information includes a surround-view image frame of the vehicle's surroundings and vehicle pose information of the vehicle; determining a second BEV image frame according to the surround-view image frame, the vehicle pose information and the first BEV image frame; and performing parking space detection based on the second BEV image frame. The parking space detection method provided by the embodiments of the present disclosure provides a more complete data basis for vehicle parking and facilitates the parking process, thereby improving parking accuracy, safety and efficiency.

Description

Parking space detection method, device, equipment, medium, program product and vehicle
Technical Field
The disclosure relates to the technical field of automatic driving, in particular to a parking space detection method, a device, equipment, a medium, a program product and a vehicle.
Background
In the related art, parking space detection is a basic component of the environment perception of an automatic parking system. During the cruising stage, occlusion is limited, so parking space detection can acquire complete information about the parking spaces. During the parking stage, however, the vehicle itself occludes part of the scene, so the parking space detection result is incomplete, which can affect automatic parking of the vehicle.
Disclosure of Invention
The present disclosure provides a parking space detection method, device, equipment, medium, program product and vehicle, to at least solve the technical problem in the related art that incomplete parking space detection results affect automatic parking of the vehicle. The technical solution of the present disclosure is as follows:
according to a first aspect of an embodiment of the present disclosure, there is provided a parking space detection method, including:
acquiring environment information of a vehicle at the current moment and a first bird's eye view (BEV) image frame of the vehicle at the previous moment; wherein the environment information includes a surround-view image frame of the vehicle's surroundings and vehicle pose information of the vehicle;
determining a second BEV image frame according to the surround-view image frame, the vehicle pose information of the vehicle and the first BEV image frame;
and performing parking space detection based on the second BEV image frame.
In one possible implementation, the determining a second BEV image frame according to the surround-view image frame, the vehicle pose information of the vehicle and the first BEV image frame includes:
performing stitching processing on the surround-view image frame, the vehicle pose information and the first BEV image frame to obtain the second BEV image frame.
In one possible embodiment, the surround-view image frame comprises image frames of at least the four sides of the vehicle, acquired by cameras disposed on the vehicle; the cameras are respectively arranged on the four sides of the vehicle, with at least one camera on each side.
In one possible embodiment, the method further comprises:
the second BEV image frame is stored.
In one possible embodiment, the method further comprises:
the first BEV image frame is deleted.
According to a second aspect of the embodiments of the present disclosure, there is provided a parking space detection device, including:
an acquisition module, configured to acquire environment information of a vehicle at the current moment and a first bird's eye view (BEV) image frame of the vehicle at the previous moment; wherein the environment information includes a surround-view image frame of the vehicle's surroundings and vehicle pose information of the vehicle;
a determining module, configured to determine a second BEV image frame according to the surround-view image frame, the vehicle pose information of the vehicle and the first BEV image frame;
and a detection module, configured to perform parking space detection based on the second BEV image frame.
In one possible implementation, the determining module is configured to:
perform stitching processing on the surround-view image frame, the vehicle pose information and the first BEV image frame to obtain the second BEV image frame.
In one possible embodiment, the surround-view image frame comprises image frames of at least the four sides of the vehicle, acquired by cameras disposed on the vehicle; the cameras are respectively arranged on the four sides of the vehicle, with at least one camera on each side.
In one possible embodiment, the apparatus further comprises:
and a storage module for storing the second BEV image frame.
In one possible embodiment, the apparatus further comprises:
and the deleting module is used for deleting the first BEV image frame.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the parking spot detection method according to any one of the first aspects.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a storage medium storing instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the parking space detection method according to any one of the first aspects.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the parking space detection method of any one of the first aspects.
According to a sixth aspect of embodiments of the present disclosure, there is provided a vehicle, including the parking space detection device as in any one of the second aspects.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
in the embodiments of the present disclosure, environment information of a vehicle at the current moment and a first bird's eye view (BEV) image frame of the vehicle at the previous moment are acquired, the environment information including a surround-view image frame of the vehicle's surroundings and vehicle pose information of the vehicle; a second BEV image frame is determined according to the surround-view image frame, the vehicle pose information of the vehicle and the first BEV image frame; and parking space detection is performed based on the second BEV image frame. The vehicle pose information characterizes the position and attitude of the vehicle, the surround-view image frame characterizes the vehicle's surroundings, and the first BEV image frame from the previous moment contains the image information of the ground currently covered by the vehicle body, that is, the part that would otherwise be missing from the second BEV image frame. Therefore, the second BEV image frame determined from the vehicle pose information, the surround-view image frame and the first BEV image frame of the previous moment can fully characterize the image information both around the vehicle and beneath it at the current moment. This provides a more complete data basis for vehicle parking and facilitates the parking process, thereby improving parking accuracy, safety and efficiency.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
Fig. 1 is a flow chart illustrating a method of parking spot detection according to an exemplary embodiment.
Fig. 2 is a schematic diagram showing a setting position of an image pickup apparatus according to an exemplary embodiment.
Fig. 3 is a flow chart illustrating another parking space detection method according to an exemplary embodiment.
Fig. 4 is a block diagram illustrating a parking space detection device according to an exemplary embodiment.
Fig. 5 is a block diagram of an electronic device, according to an example embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
The following describes in detail a parking space detection method, device, equipment, medium, program product and vehicle provided by the embodiments of the present disclosure with reference to the accompanying drawings.
Fig. 1 is a flowchart illustrating a parking space detection method according to an exemplary embodiment. The method may be executed by a vehicle, for example by a vehicle controller or a parking space detection device provided on the vehicle. As shown in fig. 1, the method may include the following steps.
In step S101, environment information of the vehicle at the current moment and a first bird's eye view (BEV) image frame of the vehicle at the previous moment are acquired.
The environment information includes a surround-view image frame of the vehicle's surroundings and vehicle pose information of the vehicle. The vehicle pose information may include, for example, the wheel speed, heading angle, tire steering angle and tire side-slip angle of the vehicle; it may include only one or more of the foregoing, or may further include other parameters characterizing the vehicle pose.
In the embodiments of the present disclosure, when parking space detection is performed, the environment information of the vehicle at the current moment may be acquired, for example the surround-view image frame and the vehicle pose information at the current moment, where the surround-view image frame is an image frame of the vehicle's surroundings. The bird's eye view image of the vehicle at the previous moment, that is, the first BEV (Bird's Eye View) image frame, may also be acquired. The previous moment is the moment, closest to and before the current moment, at which a BEV image frame was obtained. Taking a current time of 10:00 as an example, with surround-view and BEV image frames acquired once per second, the first BEV image frame is the BEV image frame obtained at 09:59. It is understood that the first BEV image frame may itself have been derived from the surround-view image frame of its own moment and the BEV image frame of the moment before that: still taking the current time as 10:00, the first BEV image frame of 09:59 may have been obtained from the surround-view image frame of 09:59 and the BEV image frame of 09:58.
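As a rough illustration of step S101, the per-tick inputs can be modelled as a small structure pairing the surround-view frames and the ego pose with the BEV frame kept from the previous moment. All names here (`EnvironmentInfo`, `gather_inputs`, the sensor and store stubs) are illustrative assumptions, not interfaces from the patent:

```python
from dataclasses import dataclass
from typing import Any, Dict, Tuple

@dataclass
class EnvironmentInfo:
    """Environment information for one tick: surround-view frames keyed
    by vehicle side, plus the vehicle pose (x, y, yaw). Illustrative only."""
    surround_frames: Dict[str, Any]
    pose: Tuple[float, float, float]

def gather_inputs(sensors, bev_store):
    """Step S101 sketch: pair the current environment information with
    the first BEV image frame produced at the previous moment."""
    env = sensors.read()              # surround-view frames + pose, time t
    prev_bev = bev_store.latest()     # first BEV image frame, time t-1
    return env, prev_bev

# Minimal stubs standing in for real sensor and storage interfaces.
class SensorStub:
    def read(self):
        return EnvironmentInfo(
            surround_frames={"front": "img_f", "rear": "img_b",
                             "left": "img_l", "right": "img_r"},
            pose=(0.0, 0.0, 0.0))

class StoreStub:
    def latest(self):
        return "bev_t_minus_1"

env, prev_bev = gather_inputs(SensorStub(), StoreStub())
```

The stubs exist only so the sketch runs end to end; on a vehicle, `sensors.read()` would wrap the four camera streams and the odometry interface.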
In step S102, a second BEV image frame is determined from the ring image frame, vehicle pose information of the vehicle, and the first BEV image frame.
In the embodiments of the present disclosure, after the surround-view image frame of the vehicle at the current moment, the vehicle pose information of the vehicle, and the first bird's eye view (BEV) image frame of the vehicle at the previous moment are acquired, the BEV image frame at the current moment, that is, the second BEV image frame, may be determined according to the surround-view image frame, the vehicle pose information and the first BEV image frame. That is, the second BEV image frame is obtained based on the surround-view image frame at the current moment, the vehicle pose information, and the first BEV image frame at the previous moment. Since the BEV image frame at each moment is derived in this way from the BEV image frame at the moment before it, each BEV image frame includes both the surround-view image of the vehicle's surroundings and the image of the ground covered by the vehicle body. In this way, the second BEV image frame is more complete.
In step S103, a parking space detection is performed based on the second BEV image frame.
In the embodiments of the present disclosure, after the second BEV image frame is determined, parking space detection may be performed based on it. The implementation of parking space detection on a BEV image is similar to that in the related art and is not described here.
In the embodiments of the present disclosure, environment information of a vehicle at the current moment and a first bird's eye view (BEV) image frame of the vehicle at the previous moment are acquired, the environment information including a surround-view image frame of the vehicle's surroundings and vehicle pose information of the vehicle; a second BEV image frame is determined according to the surround-view image frame, the vehicle pose information of the vehicle and the first BEV image frame; and parking space detection is performed based on the second BEV image frame. The vehicle pose information characterizes the position and attitude of the vehicle, the surround-view image frame characterizes the vehicle's surroundings, and the first BEV image frame from the previous moment contains the image information of the ground currently covered by the vehicle body, that is, the part that would otherwise be missing from the second BEV image frame. Therefore, the second BEV image frame determined from the vehicle pose information, the surround-view image frame and the first BEV image frame of the previous moment can fully characterize the image information both around the vehicle and beneath it at the current moment. This provides a more complete data basis for vehicle parking and facilitates the parking process, thereby improving parking accuracy, safety and efficiency.
In one possible implementation, the step of determining the second BEV image frame according to the surround-view image frame, the vehicle pose information of the vehicle and the first BEV image frame may be implemented as follows:
performing stitching processing on the surround-view image frame, the vehicle pose information and the first BEV image frame to obtain the second BEV image frame.
In the embodiments of the present disclosure, when the second BEV image frame is determined according to the surround-view image frame, the vehicle pose information of the vehicle and the first BEV image frame, the three may be subjected to stitching processing, and the BEV image frame obtained by the stitching is taken as the second BEV image frame. For example, the first BEV image frame of the previous moment may be combined with the surround-view image frame of the current moment and the vehicle pose information: the surround-view image frame and the pose information are rendered into the current BEV image frame, and the part of the image missing at the current moment is projected from the BEV image frame of the previous moment into the BEV image frame of the current moment to obtain the second BEV image frame. The specific stitching method for the surround-view image frame, the vehicle pose information and the first BEV image frame is similar to stitching methods in the related art. In this way, the second BEV image frame is more comprehensive and complete, providing a more complete data basis for parking space detection and improving its accuracy.
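The patent does not disclose a concrete stitching algorithm, but the projection idea can be sketched on a discrete BEV grid: once the ego displacement between the two moments is known from the pose information, the cells hidden by the vehicle body at the current moment are filled from the previous BEV frame. The integer-cell shift here is a simplifying assumption; a real system would warp by the full planar pose (translation plus rotation):

```python
def stitch_fill(current_bev, prev_bev, dy, dx, vehicle_mask):
    """Fill the vehicle-occluded cells of the current BEV grid with
    cells from the previous BEV grid, shifted by the ego displacement
    (dy, dx) in grid cells. Grids are lists of lists; vehicle_mask is
    True where the vehicle body hides the ground at the current moment."""
    h, w = len(prev_bev), len(prev_bev[0])
    out = [row[:] for row in current_bev]      # copy, don't mutate input
    for y in range(h):
        for x in range(w):
            if vehicle_mask[y][x]:
                sy, sx = y + dy, x + dx        # where that ground was at t-1
                if 0 <= sy < h and 0 <= sx < w:
                    out[y][x] = prev_bev[sy][sx]
    return out

# Example: a 3x3 grid where the centre cell is hidden by the vehicle at
# time t, and the vehicle has advanced one cell since t-1.
prev = [[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]]
cur = [[0] * 3 for _ in range(3)]
mask = [[False] * 3 for _ in range(3)]
mask[1][1] = True
filled = stitch_fill(cur, prev, dy=1, dx=0, vehicle_mask=mask)
```

In the example, the occluded centre cell is recovered from the cell one row further along in the previous frame, while unoccluded cells keep their current values.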
In some possible embodiments, the surround-view image frame comprises image frames of at least the four sides of the vehicle, acquired by cameras disposed on the vehicle; the cameras are respectively arranged on the four sides of the vehicle, with at least one camera on each side.
In the embodiments of the present disclosure, the surround-view image frame may be acquired by cameras disposed on the vehicle; each camera may be, for example, a fisheye camera, or another camera capable of surround-view image acquisition. For example, cameras may be disposed on at least four sides of the vehicle, with at least one camera on each side, the camera(s) on each side being configured to capture the image frame of that side. Fig. 2 illustrates an example installation in which one camera is provided on each side of the vehicle; it is understood that two or more cameras may be provided on a side, and that different sides may carry the same or different numbers of cameras. The embodiments of the present disclosure may take the fisheye images (that is, the surround-view image frames) acquired by a four-way fisheye camera arrangement on the vehicle body, together with vehicle pose information and dead reckoning, as input, and use them to supplement the mapped BEV image; the supplemented image can include the image information lost due to occlusion by the vehicle. Arranging cameras on at least four sides of the vehicle ensures that a surround-view image frame can be acquired for every side.
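The "at least one camera per side" requirement can be expressed as a simple configuration check. The side names and camera identifiers below are illustrative, not from the patent:

```python
from collections import Counter

REQUIRED_SIDES = {"front", "rear", "left", "right"}

def has_full_coverage(cameras):
    """cameras: iterable of (camera_id, side) pairs. Returns True when
    each of the four vehicle sides carries at least one camera."""
    per_side = Counter(side for _, side in cameras)
    return all(per_side[s] >= 1 for s in REQUIRED_SIDES)

# The four-way fisheye arrangement described above.
four_fisheyes = [("fisheye_0", "front"), ("fisheye_1", "rear"),
                 ("fisheye_2", "left"), ("fisheye_3", "right")]
```

A layout missing any side fails the check, while extra cameras on a side (two or more per side, as the text allows) still pass.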
In some possible implementations, the method provided by the embodiments of the present disclosure may further include the following processes:
the second BEV image frame is stored.
In the embodiments of the present disclosure, the second BEV image frame may be stored after it is obtained or after parking space detection is performed. The stored second BEV image frame can then be combined with the surround-view image frame and vehicle pose information of the next moment to stitch the BEV image frame of that next moment for parking space detection. This provides a real-time data basis for subsequent parking space detection, improving the timeliness and accuracy of the detection results.
In some possible implementations, the method provided by the embodiments of the present disclosure may further include the following processes:
the first BEV image frame is deleted.
In the embodiments of the present disclosure, after the second BEV image frame is obtained, or after parking space detection is performed, the first BEV image frame of the previous moment may be deleted. That is, each time the latest BEV image frame of the current moment is obtained, or each time parking space detection is performed, the BEV image frame of the previous moment may be deleted. Deleting stale data promptly avoids occupying memory unnecessarily, reduces memory usage, prevents parking space detection from failing for lack of memory due to accumulated old data, and thus improves the success rate of parking space detection.
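The store-then-delete behaviour amounts to a single-slot store: committing the frame for time t overwrites the frame for time t-1, so memory use stays constant for the whole manoeuvre. This is a sketch of the described behaviour, not the patented code:

```python
class SingleFrameBevStore:
    """Keeps exactly one BEV frame at a time. Committing the second BEV
    frame replaces (i.e. deletes) the first one, so only the latest
    frame is ever held in memory."""
    def __init__(self):
        self._frame = None

    def latest(self):
        return self._frame

    def commit(self, new_frame):
        # Replacing the only reference releases the previous frame for
        # garbage collection: the "delete the first BEV frame" step.
        self._frame = new_frame

store = SingleFrameBevStore()
store.commit("bev_t1")
store.commit("bev_t2")   # "bev_t1" is dropped here
```

In a language without garbage collection, `commit` would instead free the old frame's buffer explicitly before installing the new one.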
In order to make the parking space detection method provided by the embodiment of the present disclosure clearer, the parking space detection method provided by the embodiment of the present disclosure is described below with reference to fig. 3. As shown in fig. 3, the parking space detection method may include the following processes:
in step S301, a looking-around image frame and vehicle pose information of the vehicle at the current time and a first BEV image frame of the vehicle at the previous time are acquired.
In step S302, a stitching process is performed on the ring image frame, the vehicle pose information of the vehicle, and the first BEV image frame, and a second BEV image frame is determined.
In step S303, a parking space detection is performed based on the second BEV image frame.
In step S304, the second BEV image frame is stored and the first BEV image frame is deleted.
In the embodiments of the present disclosure, the fisheye images (that is, the surround-view image frames) acquired by the four-way fisheye cameras arranged on the vehicle body, together with vehicle pose information and dead reckoning, are used as input to supplement the mapped BEV image. This addresses problems such as unstable parking space detection and missed detections caused by BEV images that are incomplete during parking because the moving vehicle occludes part of the scene. It improves the stability of parking space detection, preserves the original topological structure of the parking space, facilitates subsequent fusion, and provides more accurate data support for automatic parking. The original model prediction and training data do not need to be modified; parking space detection can be achieved simply by rendering the input information.
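The four steps of Fig. 3 form one iteration of a per-tick loop, which can be sketched as follows. `sensors`, `bev_store`, `stitch` and `detect` are injected stand-ins: the patent fixes the steps but not these interfaces, so their signatures are assumptions:

```python
def parking_tick(sensors, bev_store, stitch, detect):
    """One iteration of the Fig. 3 loop."""
    env = sensors.read()                    # S301: surround frames + pose (t)
    prev_bev = bev_store.latest()           #        first BEV frame (t-1)
    second_bev = stitch(env, prev_bev)      # S302: stitch into second BEV
    spaces = detect(second_bev)             # S303: parking space detection
    bev_store.commit(second_bev)            # S304: store new, drop old
    return spaces

# Minimal stubs to run one tick end to end.
class _Sensors:
    def read(self):
        return {"frames": "four fisheye images", "pose": (0.0, 0.0, 0.0)}

class _Store:
    def __init__(self):
        self._frame = "bev_t0"
    def latest(self):
        return self._frame
    def commit(self, frame):
        self._frame = frame     # overwriting drops the old frame

store = _Store()
found = parking_tick(
    _Sensors(), store,
    stitch=lambda env, prev: ("bev_t1", prev),
    detect=lambda bev: ["slot_A"])
```

Injecting `stitch` and `detect` keeps the loop independent of any particular stitching method or detector, matching the text's point that the original model does not need to be modified.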
The specific implementation manner of each step in this embodiment is similar to that of the above-mentioned method embodiment, and will not be described herein.
Based on the same inventive concept, the embodiment of the present disclosure further provides a parking space detection device, as shown in fig. 4, and fig. 4 is a block diagram of a parking space detection device according to an exemplary embodiment. Referring to fig. 4, the parking space detecting apparatus 400 may include:
an acquisition module 410, configured to acquire environment information of a vehicle at the current moment and a first bird's eye view (BEV) image frame of the vehicle at the previous moment; wherein the environment information includes a surround-view image frame of the vehicle's surroundings and vehicle pose information of the vehicle;
a determining module 420, configured to determine a second BEV image frame according to the surround-view image frame, the vehicle pose information of the vehicle and the first BEV image frame;
and a detection module 430, configured to perform parking space detection based on the second BEV image frame.
In some possible embodiments, the determining module 420 is configured to:
perform stitching processing on the surround-view image frame, the vehicle pose information and the first BEV image frame to obtain the second BEV image frame.
In some possible embodiments, the surround-view image frame comprises image frames of at least the four sides of the vehicle, acquired by cameras disposed on the vehicle; the cameras are respectively arranged on the four sides of the vehicle, with at least one camera on each side.
In some possible embodiments, the apparatus further comprises:
a storage module, configured to store the second BEV image frame.
In some possible embodiments, the apparatus further comprises:
a deletion module, configured to delete the first BEV image frame.
The specific manner and technical effects of the operations performed by the respective modules in the apparatus of the above embodiments have been described in detail in the embodiments related to the method, and will not be described in detail herein.
Based on the same inventive concept, the embodiment of the present disclosure further provides a vehicle, including any one of the parking space detection devices shown in the foregoing embodiment. The specific manner and technical effects of the operations performed thereof have been described in detail in connection with the embodiments of the method, and will not be explained in detail herein.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a storage medium and a computer program product.
Fig. 5 illustrates a schematic block diagram of an example electronic device 500 that may be used to implement embodiments of the present disclosure. Electronic device 500 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 5, the electronic device 500 includes a computing unit 501 that can perform various appropriate actions and processes according to a computer program stored in a Read-Only Memory (ROM) 502 or a computer program loaded from a storage unit 508 into a Random Access Memory (RAM) 503. The RAM 503 can also store various programs and data required for the operation of the electronic device 500. The computing unit 501, the ROM 502, and the RAM 503 are connected to each other by a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
A number of components in electronic device 500 are connected to I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, etc.; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508 such as a magnetic disk, an optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the electronic device 500 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 501 may be any of a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller or microcontroller. The computing unit 501 performs the methods and processes described above, such as the parking space detection method. For example, in some embodiments, the parking space detection method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 500 via the ROM 502 and/or the communication unit 509. When a computer program is loaded into the RAM 503 and executed by the computing unit 501, one or more steps of the parking space detection method described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the parking space detection method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code of a computer program product for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. This program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a storage medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The storage medium may be a machine-readable signal medium or a machine-readable storage medium. The storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), the internet, and blockchain networks.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system and overcomes the defects of difficult management and weak service scalability found in traditional physical hosts and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system or a server that incorporates a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (14)

1. A parking space detection method, characterized by comprising:
acquiring environment information of a vehicle at the current moment and a first bird's-eye-view (BEV) image frame of the vehicle at the previous moment; wherein the environment information includes a surround-view image frame around the vehicle and vehicle pose information of the vehicle;
determining a second BEV image frame according to the surround-view image frame, the vehicle pose information of the vehicle, and the first BEV image frame;
and performing parking space detection based on the second BEV image frame.
2. The parking space detection method according to claim 1, wherein the determining a second BEV image frame according to the surround-view image frame, the vehicle pose information of the vehicle, and the first BEV image frame comprises:
splicing the surround-view image frame, the vehicle pose information, and the first BEV image frame to obtain the second BEV image frame.
3. The parking space detection method according to claim 1, wherein the surround-view image frame includes image frames of at least four sides around the vehicle, acquired by cameras provided on the vehicle; the cameras are arranged on the four sides of the vehicle, with at least one camera on each side.
4. The parking space detection method according to claim 1, further comprising:
the second BEV image frame is stored.
5. The parking space detection method according to claim 1, further comprising:
the first BEV image frame is deleted.
6. A parking space detection device, characterized by comprising:
an acquisition module, configured to acquire environment information of a vehicle at the current moment and a first bird's-eye-view (BEV) image frame of the vehicle at the previous moment; wherein the environment information includes a surround-view image frame around the vehicle and vehicle pose information of the vehicle;
a determining module, configured to determine a second BEV image frame according to the surround-view image frame, the vehicle pose information of the vehicle, and the first BEV image frame;
and a detection module, configured to perform parking space detection based on the second BEV image frame.
7. The parking space detection device according to claim 6, wherein the determining module is configured to:
splice the surround-view image frame, the vehicle pose information, and the first BEV image frame to obtain the second BEV image frame.
8. The parking space detection device according to claim 6, wherein the surround-view image frame includes image frames of at least four sides around the vehicle, acquired by cameras provided on the vehicle; the cameras are arranged on the four sides of the vehicle, with at least one camera on each side.
9. The parking space detection device according to claim 6, further comprising:
a storage module, configured to store the second BEV image frame.
10. The parking space detection device according to claim 6, further comprising:
a deleting module, configured to delete the first BEV image frame.
11. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the parking spot detection method of any one of claims 1 to 5.
12. A storage medium having instructions stored thereon which, when executed by a processor of an electronic device, enable the electronic device to perform the parking space detection method of any one of claims 1 to 5.
13. A computer program product comprising a computer program which, when executed by a processor, implements the parking spot detection method according to any one of claims 1 to 5.
14. A vehicle comprising a parking space detection apparatus as claimed in any one of claims 6 to 10.
CN202310318629.XA 2023-03-28 2023-03-28 Parking space detection method, device, equipment, medium, program product and vehicle Pending CN116452522A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310318629.XA CN116452522A (en) 2023-03-28 2023-03-28 Parking space detection method, device, equipment, medium, program product and vehicle


Publications (1)

Publication Number Publication Date
CN116452522A (en) 2023-07-18

Family

ID=87121232

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310318629.XA Pending CN116452522A (en) 2023-03-28 2023-03-28 Parking space detection method, device, equipment, medium, program product and vehicle

Country Status (1)

Country Link
CN (1) CN116452522A (en)

Similar Documents

Publication Publication Date Title
CN111209978B (en) Three-dimensional visual repositioning method and device, computing equipment and storage medium
CN112862877B (en) Method and apparatus for training an image processing network and image processing
CN112634343A (en) Training method of image depth estimation model and processing method of image depth information
CN113011323B (en) Method for acquiring traffic state, related device, road side equipment and cloud control platform
CN113436100B (en) Method, apparatus, device, medium, and article for repairing video
CN115578433B (en) Image processing method, device, electronic equipment and storage medium
CN113392794B (en) Vehicle line crossing identification method and device, electronic equipment and storage medium
CN110675635A (en) Method and device for acquiring external parameters of camera, electronic equipment and storage medium
JP2022050311A (en) Method for detecting lane change of vehicle, system, electronic apparatus, storage medium, roadside machine, cloud control platform, and computer program
CN115719436A (en) Model training method, target detection method, device, equipment and storage medium
CN112597895A (en) Confidence determination method based on offset detection, road side equipment and cloud control platform
CN113378605B (en) Multi-source information fusion method and device, electronic equipment and storage medium
CN115965939A (en) Three-dimensional target detection method and device, electronic equipment, medium and vehicle
CN115861755A (en) Feature fusion method and device, electronic equipment and automatic driving vehicle
CN116452522A (en) Parking space detection method, device, equipment, medium, program product and vehicle
CN115937822A (en) Positioning and mapping method and device, electronic equipment and readable storage medium
CN114429631A (en) Three-dimensional object detection method, device, equipment and storage medium
CN114119990A (en) Method, apparatus and computer program product for image feature point matching
CN110910312A (en) Image processing method and device, automatic driving vehicle and electronic equipment
CN115049895B (en) Image attribute identification method, attribute identification model training method and device
CN113591847B (en) Vehicle positioning method and device, electronic equipment and storage medium
CN115578432B (en) Image processing method, device, electronic equipment and storage medium
CN116229209B (en) Training method of target model, target detection method and device
CN112991179B (en) Method, apparatus, device and storage medium for outputting information
CN112700657B (en) Method and device for generating detection information, road side equipment and cloud control platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination