CN113076896A - Standard parking method, system, device and storage medium - Google Patents

Standard parking method, system, device and storage medium

Info

Publication number
CN113076896A
CN113076896A
Authority
CN
China
Prior art keywords
parking
target vehicle
image
orientation
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110386027.9A
Other languages
Chinese (zh)
Inventor
应云剑
万鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qisheng Technology Co Ltd
Original Assignee
Beijing Qisheng Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qisheng Technology Co Ltd
Priority to CN202110386027.9A
Publication of CN113076896A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V10/464 Salient features, e.g. scale invariant feature transforms [SIFT] using a plurality of salient features, e.g. bag-of-words [BoW] representations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiments of the present application disclose a standard parking method. The method may include acquiring a sequence of images of a target vehicle before parking; determining a movement trajectory of the target vehicle before parking based on the image sequence; determining a parking orientation of the target vehicle based on the movement trajectory of the target vehicle before parking; and determining whether the parking of the target vehicle complies with the specification based on the parking orientation of the target vehicle.

Description

Standard parking method, system, device and storage medium
Technical Field
The present application relates to the field of intelligent transportation, and in particular, to a method, a system, an apparatus, and a storage medium for standardized parking.
Background
With the rapid development of the internet, shared vehicles (e.g., shared bicycles) have become widely available as a new rental model. However, as large numbers of shared bicycles and electric bicycles are deployed, disorderly parking leads to large piles of haphazardly stacked vehicles, which detracts from the appearance of the city. Therefore, it is desirable to provide a reasonable method and system for regulating parking so that shared bicycles and electric bicycles can serve users in a more orderly and efficient manner.
Disclosure of Invention
One aspect of the present application provides a normative parking method. The method may include acquiring a sequence of images of a target vehicle before parking; determining a movement trajectory of the target vehicle before parking based on the image sequence; determining a parking orientation of the target vehicle based on the movement trajectory of the target vehicle before parking; and determining whether the parking of the target vehicle complies with the specification based on the parking orientation of the target vehicle.
In some embodiments, the determining whether the parking of the target vehicle is normative based on the parking orientation of the target vehicle may include comparing the parking orientation of the target vehicle with a positional orientation of a parking area, and determining that the parking of the target vehicle is normative when the angle between the parking orientation of the target vehicle and the positional orientation of the parking area is within a set threshold.
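As an illustration only, the threshold comparison described above can be sketched as follows; the function name, the use of degrees, and the 30-degree default threshold are assumptions of this sketch, not values fixed by the application:

```python
def is_parking_normative(vehicle_heading_deg: float,
                         area_heading_deg: float,
                         threshold_deg: float = 30.0) -> bool:
    """Return True when the angle between the vehicle's parking
    orientation and the parking area's positional orientation is
    within the set threshold."""
    # Wrap the raw difference into [0, 180] so that headings such as
    # 350 degrees and 10 degrees count as 20 degrees apart, not 340.
    diff = abs(vehicle_heading_deg - area_heading_deg) % 360.0
    if diff > 180.0:
        diff = 360.0 - diff
    return diff <= threshold_deg
```

Under this sketch, a vehicle parked nearly parallel to the area (e.g., headings of 350 and 10 degrees) passes, while one parked crosswise (90 degrees apart) does not.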
In some embodiments, the method may further include acquiring at least one first image of the sequence of images that includes parking area information; and recognizing the positional orientation of the parking area using an image recognition model based on the at least one first image.
In some embodiments, the method may further comprise performing at least one of the following when the parking of the target vehicle is determined to be out of specification: sending a vehicle locking prohibition instruction to the target vehicle; controlling the target vehicle to broadcast a first prompt tone; or sending first prompt information to a user terminal corresponding to the target vehicle.
In some embodiments, the method may further include acquiring a second image of the target vehicle after parking; and determining whether the target vehicle is within a parking area based on the second image.
In some embodiments, the determining whether the target vehicle is within a parking area based on the second image comprises determining, based on the second image, whether there are parking frame lines around the target vehicle; and determining that the target vehicle is located within the parking area when at least two parking frame lines are present around the target vehicle.
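The frame-line criterion above can be sketched as follows. This is a minimal illustration: the line representation and the assumption that the camera-equipped vehicle sits near the horizontal centre of the second image are ours, and the upstream line detector is left abstract:

```python
from typing import List, Tuple

# A detected parking frame line as image-pixel endpoints (x1, y1, x2, y2),
# assumed to come from an upstream line detector run on the second image.
Line = Tuple[float, float, float, float]

def is_within_parking_area(frame_lines: List[Line], image_width: int) -> bool:
    """Decide whether the target vehicle is inside a parking area: at
    least two frame lines around the vehicle, one on each side of the
    image centre, suggest the vehicle stands between the two lines."""
    centre = image_width / 2.0
    left = any((x1 + x2) / 2.0 < centre for x1, _, x2, _ in frame_lines)
    right = any((x1 + x2) / 2.0 >= centre for x1, _, x2, _ in frame_lines)
    return len(frame_lines) >= 2 and left and right
```

With one frame line detected on each side of the image centre, the vehicle is judged to stand between the two lines of a parking space; a single line, or two lines on the same side, does not satisfy the criterion.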
In some embodiments, the method may further comprise: when the target vehicle is not within the parking area, performing at least one of: sending a vehicle locking prohibition instruction to the target vehicle; controlling the target vehicle to broadcast a second prompt tone; or sending second prompt information to a user terminal corresponding to the target vehicle.
Another aspect of the present application provides a normative parking system. The system may include an acquisition module, a movement trajectory determination module, a parking orientation determination module, and a parking specification determination module. The acquisition module may be configured to acquire a sequence of images of a target vehicle before parking. The movement trajectory determination module may be configured to determine a movement trajectory of the target vehicle before parking based on the sequence of images. The parking orientation determination module may be configured to determine a parking orientation of the target vehicle based on a movement trajectory of the target vehicle before parking. The parking specification determination module may be configured to determine whether parking of the target vehicle is in specification based on the parking orientation of the target vehicle.
Another aspect of the present application provides a normative parking apparatus. The apparatus includes at least one processor and at least one memory. The at least one memory may be used to store computer instructions. The at least one processor may be configured to execute at least some of the computer instructions to implement the canonical parking method described above.
Another aspect of the present application provides a computer-readable storage medium. The storage medium may store computer instructions. The computer instructions, when executed by a processor, may implement the canonical parking method described above.
Drawings
The present application will be further explained by way of exemplary embodiments, which will be described in detail by way of the accompanying drawings. These embodiments are not intended to be limiting, and in these embodiments like numerals are used to indicate like structures, wherein:
FIG. 1 is a diagram of an exemplary application scenario for a canonical parking system shown in accordance with some embodiments of the application;
FIG. 2 is a block diagram of an exemplary processing device shown in accordance with some embodiments of the present application;
FIG. 3 is an exemplary flow chart of a normative parking method shown in accordance with some embodiments of the present application;
FIG. 4 is an exemplary flow chart illustrating the determination of whether a target vehicle is within a parking area according to some embodiments of the present application;
FIG. 5 is a schematic illustration of a target vehicle's movement trajectory and parking area, shown in accordance with some embodiments of the present application.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only examples or embodiments of the application; based on them, a person of ordinary skill in the art can apply the application to other similar scenarios without inventive effort. Unless otherwise apparent from the context, or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
It should be understood that "system," "device," "unit," and/or "module" as used herein is merely a way of distinguishing different components, elements, parts, portions, or assemblies at different levels. However, these words may be replaced by other expressions if those expressions achieve the same purpose.
As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. In general, the terms "comprises" and "comprising" merely indicate the inclusion of explicitly identified steps and elements; such steps and elements do not constitute an exclusive list, and a method or apparatus may also include other steps or elements.
Flow charts are used herein to illustrate operations performed by systems according to embodiments of the present application. It should be understood that the operations are not necessarily performed exactly in the order shown. Rather, the various steps may be processed in reverse order or simultaneously. Meanwhile, other operations may be added to these processes, or one or more steps may be removed from them.
The embodiments of the present application can be applied to different shared transportation service systems, including human-powered vehicles (e.g., bicycles, electric bicycles, etc.), automobiles (e.g., small cars, buses, large transportation vehicles, etc.), unmanned vehicles, and the like. The application scenarios of the different embodiments of the present application include, but are not limited to, one or a combination of the transportation industry, the warehouse logistics industry, agricultural operation systems, urban public transportation systems, commercially operated shared vehicles, etc. It should be understood that the application scenarios of the system and method of the present application are merely examples or embodiments; a person skilled in the art can also apply the present application to other similar scenarios, such as other similar tracked vehicles, without inventive effort.
FIG. 1 is a schematic illustration of an application scenario of an exemplary canonical parking system according to some embodiments of the application.
The normative parking system 100 may acquire image data of the target vehicle before parking and determine a moving trajectory and a parking orientation of the target vehicle before parking based on the image data, thereby determining whether parking of the target vehicle conforms to the norm. The canonical parking system 100 may be used for service platforms for the internet or other networks. For example, the canonical parking system 100 may be an online service platform that shares vehicles. As shown in fig. 1, the canonical parking system 100 may include a server 110, a network 120, a terminal device 130, a storage device 140, and a vehicle 150.
In some embodiments, server 110 may be used to process information and/or data related to the regulated parking. For example, the processing device 112 may acquire a sequence of images of the subject vehicle from the vehicle 150 before the subject vehicle is parked. For another example, the processing device 112 may obtain a second image of the target vehicle after parking from the vehicle 150. The server 110 may be a computer server. In some embodiments, the server 110 may be a single server or a group of servers. The server group may be a centralized server group connected to the network 120 via an access point, or a distributed server group respectively connected to the network 120 via one or more access points. In some embodiments, server 110 may be connected locally to network 120 or remotely from network 120. For example, server 110 may access information and/or data stored in terminal device 130 and/or storage device 140 via network 120. As another example, storage device 140 may serve as back-end storage for server 110. In some embodiments, the server 110 may be implemented on a cloud platform. By way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an intermediate cloud, a multi-cloud, and the like, or any combination thereof.
In some embodiments, the server 110 may include a processing device 112. Processing device 112 may process information and/or data related to performing one or more of the functions described in the present application. For example, the processing device 112 may determine a movement trajectory of the target vehicle before parking based on the sequence of images to determine a parking orientation of the target vehicle. For another example, the processing device 112 may determine whether parking of the target vehicle is compliant with a specification based on the parking orientation of the target vehicle. Meanwhile, the processing device 112 may send a prompt to the user based on the parking of the target vehicle. In some embodiments, the processing device 112 may include one or more processing units (e.g., single core processing engines or multiple core processing engines). By way of example only, the processing device 112 may include a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), an application specific instruction set processor (ASIP), a Graphics Processing Unit (GPU), a Physical Processing Unit (PPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a micro-controller unit, a Reduced Instruction Set Computer (RISC), a microprocessor, or the like, or any combination thereof.
Network 120 may facilitate the exchange of information and/or data. In some embodiments, one or more components in the canonical parking system 100 (e.g., server 110, terminal device 130, storage device 140, vehicle 150) may send information and/or data to other components in the canonical parking system 100 over the network 120. For example, server 110 may obtain a sequence of images from vehicle 150 via network 120. In some embodiments, the network 120 may be any type or combination of wired or wireless network. By way of example only, network 120 may include a cable network, a wired network, a fiber optic network, a telecommunications network, an intranet, the internet, a Local Area Network (LAN), a Wide Area Network (WAN), a Wireless Local Area Network (WLAN), a Metropolitan Area Network (MAN), a Public Switched Telephone Network (PSTN), a bluetooth network, a ZigBee network, a Near Field Communication (NFC) network, the like, or any combination thereof. In some embodiments, network 120 may include one or more network access points. For example, the network 120 may include wired or wireless network access points, such as base stations and/or Internet switching points 120-1, 120-2, etc. One or more components of the canonical parking system 100 may connect to the network 120 through a network access point to exchange data and/or information.
Terminal device 130 may enable user interaction with the canonical parking system 100. For example, the user may send a parking request through the terminal device 130. Vehicle 150 may transmit a sequence of images based on the parking request. In some embodiments, the terminal device 130 may also receive the prompt information (e.g., the first prompt information, the second prompt information, etc.) transmitted by the server 110. In some embodiments, the terminal device 130 may include a mobile device 130-1, a tablet 130-2, a laptop 130-3, an automotive built-in device 130-4, or the like, or any combination thereof. In some embodiments, the mobile device 130-1 may include a smart home device, a wearable device, a smart mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof. In some embodiments, the smart home devices may include smart lighting devices, control devices for smart electrical devices, smart monitoring devices, smart televisions, smart cameras, interphones, and the like, or any combination thereof. In some embodiments, the wearable device may include a smart bracelet, smart footwear, smart glasses, a smart helmet, a smart watch, smart clothing, a smart backpack, a smart accessory, or the like, or any combination thereof. In some embodiments, the smart mobile device may include a smartphone, a personal digital assistant (PDA), a gaming device, a navigation device, a point of sale (POS) device, and the like, or any combination thereof. In some embodiments, the virtual reality device and/or augmented reality device may include a virtual reality helmet, virtual reality glasses, virtual reality eyeshields, an augmented reality helmet, augmented reality glasses, augmented reality eyeshields, and the like, or any combination thereof. For example, the virtual reality device and/or augmented reality device may include Google Glass™, Oculus Rift™, HoloLens™, Gear VR™, and the like.
In some embodiments, terminal device 130 may include a location-enabled device to determine the location of the user and/or terminal device 130.
Storage device 140 may store data and/or instructions. In some embodiments, storage device 140 may store data and/or instructions that server 110 may execute to provide the methods or steps described herein. In some embodiments, storage device 140 may store data associated with vehicle 150, such as a sequence of images, a first image, a second image, etc., associated with vehicle 150. For another example, the storage device 140 may store alert tones (e.g., a first alert tone, a second alert tone) broadcast by the vehicle 150. In some embodiments, one or more components of canonical parking system 100 may access data or instructions stored in storage device 140 via network 120. In some embodiments, storage device 140 may be connected directly to server 110 as back-end storage. In some embodiments, storage device 140 may include mass storage, removable storage, volatile read-write memory, read-only memory (ROM), etc., or any combination thereof. Exemplary mass storage devices may include magnetic disks, optical disks, solid state drives, and the like. Exemplary removable memory may include flash drives, floppy disks, optical disks, memory cards, compact disks, magnetic tape, and the like. Exemplary volatile read and write memories can include Random Access Memory (RAM). Exemplary RAM may include Dynamic RAM (DRAM), double data rate synchronous dynamic RAM (DDR SDRAM), Static RAM (SRAM), thyristor RAM (T-RAM), zero capacitor RAM (Z-RAM), and the like. Exemplary ROMs may include Mask ROM (MROM), Programmable ROM (PROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), compact disk ROM (CD-ROM), digital versatile disk ROM, and the like. In some embodiments, the storage device 140 may be implemented on a cloud platform. 
By way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an intermediate cloud, a multi-cloud, and the like, or any combination thereof.
In some embodiments, the vehicles 150 may include bicycles, electric bicycles, tricycles, minicars, vans, trucks, and the like. In some embodiments, the vehicle 150 may include a private car, a taxi, and the like. In some embodiments, the vehicle 150 may include a manned vehicle and/or an unmanned autonomous vehicle, and the like; the present description does not limit the type of the vehicle 150. In some embodiments, vehicle 150 may include a positioning device. In some embodiments, the positioning device may be a Global Positioning System (GPS), a global navigation satellite system (GLONASS), a COMPASS navigation system, a BeiDou navigation satellite system, a Galileo positioning system, a quasi-zenith satellite system (QZSS), or the like. In some embodiments, vehicle 150 may include one or more image capture devices 151. The image capture device 151 may be configured to obtain image information (e.g., a sequence of images, a first image, a second image) of the target vehicle. In some embodiments, the image capture device may be positioned on a frame, handlebar, saddle, fender, etc. of the vehicle 150, or any other suitable location for securing the image capture device. In some embodiments, image capture device 151 may transmit the acquired image data to one or more components of the canonical parking system 100 (e.g., server 110, terminal device 130, and/or storage device 140) via network 120. In some embodiments, image capture device 151 may include a processor and/or memory. In some embodiments, image capture device 151 may include a wide-angle camera, a fisheye camera, a monocular camera, a binocular camera, a depth camera (RGBD camera), a dome camera, an infrared camera, a digital video recorder (DVR), etc., or any combination thereof. In some embodiments, the image acquired by the image capture device 151 may be a two-dimensional image, a three-dimensional image, a four-dimensional image, or the like.
It should be noted that the above description is merely for convenience and is not intended to limit the present application to the scope of the illustrated embodiments. It will be understood by those skilled in the art that, having the benefit of the teachings of this system, various modifications and changes in form and detail may be made to the fields of application in which the above method and system are practiced without departing from these teachings.
FIG. 2 is a block diagram of an exemplary processing device shown in accordance with some embodiments of the present application. As shown in fig. 2, the processing device 112 may include an acquisition module 210 and a determination module 220.
The acquisition module 210 may be configured to acquire a sequence of images of a target vehicle before parking. For example, the sequence of images may be captured by one or more image capture devices mounted on the subject vehicle and transmitted to one or more components of the canonical parking system 100 via a wireless connection. As another example, the sequence of images may be captured by one or more image capture devices mounted on the subject vehicle and transmitted to one or more components of the canonical parking system 100 via a wireless connection after image processing. In some embodiments, the sequence of images may be used to determine a movement trajectory of the target vehicle within a preset time. In some embodiments, the acquisition module 210 may also be used to acquire a second image of the target vehicle after parking.
The determination module 220 may be used to determine whether the target vehicle parking is normative. In some embodiments, the determination module 220 may include a movement trajectory determination module 221, a parking orientation determination module 222, and a parking specification determination module 223. The movement trajectory determination module 221 may be configured to determine a movement trajectory of the target vehicle before parking based on the sequence of images. For example, based on the image sequence, the movement trajectory determination module 221 may determine the movement trajectory of the target vehicle before parking by positioning and mapping the target vehicle through a Simultaneous positioning and mapping (SLAM) technique. The parking orientation determination module 222 may be configured to determine a parking orientation of the target vehicle based on a movement trajectory of the target vehicle before parking. For example, the parking orientation determination module 222 may determine the last movement trajectory of the target vehicle based on the movement trajectory of the target vehicle before parking, and determine the parking orientation of the target vehicle based on the last movement trajectory of the target vehicle. The parking specification determination module 223 may be configured to determine whether parking of the target vehicle is in compliance with the specification based on the parking orientation of the target vehicle. For example, the parking specification determination module 223 may compare the parking orientation of the target vehicle with the positional orientation of the parking area. The parking specification determination module 223 may determine that the parking of the target vehicle is in specification when the parking orientation of the target vehicle is within a set threshold angle from the positional orientation of the parking area. 
The parking specification determination module 223 may determine that the parking of the target vehicle is out of specification when the parking orientation of the target vehicle is not within the set threshold angle from the positional orientation of the parking area.
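The last-segment heuristic used by the parking orientation determination module 222 can be sketched as follows; representing the trajectory as 2-D (x, y) positions and the function name are assumptions of this sketch:

```python
import math
from typing import List, Tuple

def parking_orientation_deg(trajectory: List[Tuple[float, float]]) -> float:
    """Estimate the parking orientation (degrees, counter-clockwise
    from the +x axis) from the last segment of the vehicle's movement
    trajectory, given as a list of (x, y) positions."""
    (x0, y0), (x1, y1) = trajectory[-2], trajectory[-1]
    # The direction of the final movement segment approximates the
    # direction the vehicle faces when it comes to rest.
    return math.degrees(math.atan2(y1 - y0, x1 - x0)) % 360.0
```

The resulting heading can then be compared against the positional orientation of the parking area, as described above for the parking specification determination module 223.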
The determination module 220 may also be used to determine whether the target vehicle is parked in the parking area. For example, the determination module 220 may determine whether the target vehicle has a parking frame line around the target vehicle based on the second image. The determination module 220 may determine that the target vehicle is located within the parking area when the target vehicle includes at least two parking frame lines around the target vehicle.
It should be understood that the system and its modules shown in FIG. 2 may be implemented in a variety of ways. For example, in some embodiments, the system and its modules may be implemented in hardware, software, or a combination of software and hardware. Wherein the hardware portion may be implemented using dedicated logic; the software portions may be stored in a memory for execution by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the methods and systems described above may be implemented using computer executable instructions and/or embodied in processor control code, such code being provided, for example, on a carrier medium such as a diskette, CD-or DVD-ROM, a programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The system and its modules of the present application may be implemented not only by hardware circuits such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips, transistors, or programmable hardware devices such as field programmable gate arrays, programmable logic devices, etc., but also by software executed by various types of processors, for example, or by a combination of the above hardware circuits and software (e.g., firmware).
It should be noted that the above description of the processing device 112 and its modules is merely for convenience of description and should not limit the present application to the scope of the illustrated embodiments. It will be appreciated by those skilled in the art that, given the teachings of the present system, any combination of modules or sub-system configurations may be used to connect to other modules without departing from such teachings. For example, in some embodiments, the movement trajectory determination module 221, the parking orientation determination module 222, and the parking specification determination module 223 disclosed in fig. 2 may be different modules in a system, or may be a single module that implements the functions of two or more of these modules. The modules in the processing device 112 may share one storage module, or each module may have its own storage module. As another example, the processing device 112 may also include a reminder module to remind the user that the parking is not normative and that re-parking is required. Such variations are within the scope of the present application.
FIG. 3 is an exemplary flow chart of a normative parking method according to some embodiments of the present application. In some embodiments, one or more steps of the canonical parking method 300 may be implemented in the canonical parking system 100 shown in fig. 1. For example, one or more steps of method 300 may be stored as instructions in storage device 140 and invoked and/or executed by processing device 112. In some embodiments, the normative parking method 300 may be implemented by a processor on the target vehicle.
At step 310, the processing device 112 (e.g., the acquisition module 210) may acquire a sequence of images of the target vehicle before parking.
The target vehicle may be a vehicle that the user is about to park or return. The target vehicle may include a bicycle, an electric bicycle, a human-powered vehicle, a mobility device, an automobile, a taxi, a private car, a bus, a truck, an unmanned car, etc., or any combination thereof. For example, the target vehicle may be a bicycle that the user is about to park. For another example, the target vehicle may be an automobile that the user is about to return.
The image sequence may be at least one set of image data arranged in an order (e.g., temporal, spatial, etc.). The image data may include coordinate points, pixel points, image frames, video streams, etc., or any combination thereof. In some embodiments, the sequence of images may include a plurality of images acquired in succession. In some embodiments, the sequence of images may be captured by one or more image capture devices (e.g., image capture device 151) installed on the target vehicle (e.g., vehicle 150) and transmitted to one or more components of the canonical parking system 100 (e.g., server 110, network 120, terminal device 130, storage device 140, vehicle 150) via a wireless connection (e.g., WiFi, Bluetooth, etc.). The image capture device may include a wide-angle camera, a fisheye camera, a monocular camera, a binocular camera, a depth camera (RGBD camera), a dome camera, an infrared camera, a digital video recorder (DVR), etc., or any combination thereof. For example, a monocular camera on the target vehicle may capture the sequence of images, which may then be acquired in real time via WiFi transmission. For another example, upon receiving an instruction to transmit a sequence of images, the processing device 112 may acquire a sequence of images captured by a depth camera on the target vehicle. In some embodiments, the sequence of images may be captured by one or more image capture devices mounted on the target vehicle and, after image processing, transmitted to one or more components of the canonical parking system 100 (e.g., server 110, network 120, terminal device 130, storage device 140, vehicle 150) via a wireless connection (e.g., WiFi, Bluetooth, etc.). Image processing may include image transformation, image compression, image segmentation, feature point extraction, and the like, or any combination thereof.
For example, an image sequence acquired by an image acquisition device on the target vehicle may undergo feature point extraction to obtain a set of coordinates, and the vehicle 150 may acquire the set of coordinates via WiFi transmission. In some embodiments, the image sequence may be used to determine a movement trajectory of the target vehicle within a preset time. The preset time covers at least the period, before the target vehicle is parked, during which the target vehicle moves from outside the parking area to inside the parking area. In some embodiments, the preset time may be 10 seconds, 20 seconds, 30 seconds, 40 seconds, 50 seconds, 1 minute, 2 minutes, 5 minutes, 10 minutes, etc., or any other suitable time period.
At step 320, the processing device 112 (e.g., the determination module 220, the movement trajectory determination module 221) may determine a movement trajectory of the target vehicle before parking based on the sequence of images.
In some embodiments, based on the image sequence, the movement trajectory of the target vehicle before parking may be determined by locating and mapping the target vehicle through a simultaneous localization and mapping (SLAM) technique. The simultaneous localization and mapping technique acquires a set of image sequences through an image acquisition device, tracks a set of key points across the images, triangulates the three-dimensional position of the image acquisition device, and infers the pose of the image acquisition device from that position information. In some embodiments, the simultaneous localization and mapping technique may include a visual simultaneous localization and mapping technique (Visual SLAM), a visual-inertial simultaneous localization and mapping technique, and the like. According to the type of environment map used in the SLAM problem, the simultaneous localization and mapping technique may include feature-based SLAM, grid-based SLAM, topological-metric SLAM, dense-map-based SLAM (Dense SLAM), and the like. According to how the model in the SLAM problem is described, the simultaneous localization and mapping technique may include algorithms based on a state-space description (such as Extended Kalman Filtering (EKF), Compressed Extended Kalman Filtering (CEKF), etc.), algorithms based on a sample-set description (such as Rao-Blackwellized particle filter SLAM, FastSLAM, DP-SLAM, etc.), algorithms based on an information-space description (such as Extended Information Filtering (EIF), Thin Junction-Tree Filtering (TJTF), etc.), algorithms based on a difference description (such as scan matching), and the like.
In some embodiments, the simultaneous localization and mapping technique may include the steps of front-end visual odometry, back-end optimization, loop detection, mapping, and so on. The front-end visual odometry estimates the motion of the image acquisition device between adjacent images and the appearance of a local map, i.e., the motion relation between two images. The odometry estimates only the movement between adjacent moments, without correlation to earlier information. Chaining these adjacent-moment motions in series yields the movement trajectory of the image acquisition device. Because the image acquisition device is fixed on the target vehicle, its movement trajectory can be treated as equivalent to that of the target vehicle, which solves the localization problem. In addition, from the position of the image acquisition device at each moment, the positions of the spatial points corresponding to each frame in the image sequence can be calculated, yielding a map of the movement trajectory of the image acquisition device, i.e., a map of the movement trajectory of the target vehicle. The front-end visual odometry may include feature extraction and matching of images, etc. Back-end optimization deals with noise in the SLAM process. For example, back-end optimization may be used to optimize the data provided by the front-end visual odometry as well as the initial values of that data. In some embodiments, the back-end optimization may include filtering and nonlinear optimization algorithms. Loop detection, which may also be referred to as closed-loop detection, refers to the ability of the canonical parking system 100 to recognize a scene it has previously visited. If the detection succeeds, the accumulated error can be significantly reduced. Loop detection may include algorithms for detecting observation similarity.
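The chaining of adjacent-moment motions into a trajectory can be sketched in simplified form. The following is a minimal illustration, not the SLAM front end itself: it assumes the per-frame relative motions have already been estimated (here as hypothetical 2D displacements and heading changes in the vehicle's local frame) and merely composes them in series into a global track:

```python
import math

def compose(pose, motion):
    """Compose a 2D pose (x, y, heading) with a relative motion
    (dx, dy, dtheta) expressed in the pose's local frame."""
    x, y, th = pose
    dx, dy, dth = motion
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

def trajectory(start, motions):
    """Chain per-frame relative motions in series into a global trajectory,
    as the front-end visual odometry chains adjacent-moment motions."""
    poses = [start]
    for m in motions:
        poses.append(compose(poses[-1], m))
    return poses

# Two steps straight ahead, then a 90-degree left turn and one more step:
# the vehicle ends up at (2, 1) with heading pi/2.
poses = trajectory((0.0, 0.0, 0.0),
                   [(1.0, 0.0, 0.0), (1.0, 0.0, math.pi / 2), (1.0, 0.0, 0.0)])
```

A real visual odometer would estimate each `motion` from feature matches between adjacent images rather than receive it as input; the composition step, however, is the same.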
An exemplary algorithm is the bag-of-words (BoW) model. Specifically, the bag-of-words model clusters visual features (SIFT, SURF, etc.) in the images to build a dictionary, then determines which "words" each image contains. In some embodiments, loop detection may also include training a classifier. For example, loop detection may classify scenes into previously visited scenes and unvisited scenes. Mapping builds a map corresponding to the task requirements based on the estimated trajectory. Representations of the map may include a grid map, a direct representation, a topological map, a feature point map, and the like, where a feature point map represents the environment with related geometric features (e.g., points, lines, planes).
By locating and mapping the target vehicle using the simultaneous localization and mapping technique, the processing device 112 may determine the movement trajectory of the target vehicle before parking. Referring to fig. 5, which is a schematic diagram of the movement trajectory and parking areas of a target vehicle according to some embodiments of the present application, line 500 is the movement trajectory of the target vehicle before parking as determined by the simultaneous localization and mapping technique.
At step 330, the processing device 112 (e.g., the determination module 220, the parking orientation determination module 222) may determine a parking orientation of the target vehicle based on the movement trajectory of the target vehicle before parking.
In some embodiments, the processing device 112 may determine the last movement trajectory of the target vehicle based on the movement trajectory of the target vehicle before parking. The last movement trajectory is the final segment of the movement trajectory immediately before the target vehicle is parked. For example, the last movement trajectory may be the last straight line of the movement trajectory that contains the end point. Referring to fig. 5, line 501 may be the last movement trajectory of the target vehicle.
In some embodiments, the parking orientation of the target vehicle may be determined based on the last movement trajectory of the target vehicle. Because the parking orientation of the target vehicle coincides with the direction of its last movement trajectory, the direction of line 501 in fig. 5 may represent the parking orientation of the target vehicle.
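A minimal sketch of this step, assuming the trajectory is available as 2D points and that the final segment already corresponds to the last straight line containing the end point (a real system would first fit that straight segment from the SLAM trajectory):

```python
import math

def parking_orientation(trajectory):
    """Take the direction of the last segment of the trajectory
    (ending at the parking point) as the parking orientation,
    returned as an angle in degrees in [0, 360)."""
    (x0, y0), (x1, y1) = trajectory[-2], trajectory[-1]
    return math.degrees(math.atan2(y1 - y0, x1 - x0)) % 360.0

# Trajectory approaching the parking spot and finishing heading "north"
# (positive y direction), i.e. a parking orientation of 90 degrees.
track = [(0.0, 0.0), (3.0, 0.0), (3.0, 2.0)]
assert abs(parking_orientation(track) - 90.0) < 1e-9
```

The coordinate frame and the degree convention are illustrative assumptions; any consistent angular representation works, since only the difference against the parking area's position orientation matters later.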
At step 340, the processing device 112 (e.g., the determination module 220, the parking specification determination module 223) may determine whether parking of the target vehicle meets the specification based on the parking orientation of the target vehicle.
In some embodiments, the processing device 112 (e.g., the determination module 220, the parking specification determination module 223) may compare the parking orientation of the target vehicle with the position orientation of the parking area. The parking area may be an area where the target vehicle is to be parked or returned. In some embodiments, parking areas may be demarcated in some manner; exemplary means include marking lines, laying a special sign (e.g., a parking sign, etc.), the like, or any combination thereof. The position orientation of the parking area may be a parking direction preset for the parking area, for example, a direction parallel to a long side of the parking area, a direction parallel to a short side of the parking area, or the like. As shown in fig. 5, the areas delimited by the white lines are parking areas; areas 510, 520, 530, 540, 550, and 560 are each a parking area. The processing device 112 may preset the direction of arrow a as the position orientation of the parking area. In some embodiments, the processing device 112 may determine whether the target vehicle is parked in the parking area before, while, or after determining, based on the parking orientation of the target vehicle, whether the parking of the target vehicle complies with the specification. Further description regarding determining whether the target vehicle is parked in the parking area may be found elsewhere in the present application (e.g., fig. 4 and its description).
In some embodiments, the processing device 112 may acquire at least one first image of the image sequence containing parking area information, and identify the position orientation of the parking area using an image recognition model based on the at least one first image. The image recognition model may include a convolutional neural network (CNN) model, a recurrent neural network (RNN) model, a long short-term memory (LSTM) network model, a generative adversarial network (GAN) model, an artificial neural network (ANN) model, and the like. The processing device 112 may train an initial image recognition model on a sample parking area, at least one set of sample images containing the sample parking area, and corresponding position orientation labels, to obtain a final model (i.e., the image recognition model) that meets the requirements. In particular, the processing device 112 may input the at least one first image containing parking area information into the image recognition model, which may output the position orientation of the parking area. In some embodiments, the processing device 112 may identify, based on the at least one first image, a marker in the parking area that indicates the position orientation (e.g., an arrow indicating the parking direction) and determine the position orientation of the parking area based on the marker.
In some embodiments, the processing device 112 may determine the parking area and its position orientation from the location determined by the simultaneous localization and mapping technique. For example, information on all parking areas in a city (e.g., the number, position orientation, and the like of the parking areas) may be stored in the storage device 140 in advance. The location of the parking area being used can be determined by the simultaneous localization and mapping technique, and the processing device 112 may then determine (e.g., retrieve from the storage device 140) the parking area and its position orientation from that location.
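Such a lookup can be sketched as follows, with a hypothetical pre-stored registry mapping parking-area identifiers to their locations and preset position orientations (the identifiers, coordinates, and orientations below are illustrative, not from the application); the SLAM-estimated location is resolved to the nearest stored area:

```python
# Hypothetical pre-stored registry: area id -> ((x, y) location, orientation
# in degrees), standing in for the city-wide data kept in storage device 140.
AREAS = {
    "510": ((10.0, 20.0), 90.0),
    "520": ((10.0, 25.0), 90.0),
    "530": ((40.0, 20.0), 0.0),
}

def nearest_area(location, areas):
    """Resolve a SLAM-estimated location to the closest stored parking
    area and return its id and preset position orientation."""
    def dist2(area_id):
        (ax, ay), _ = areas[area_id]
        return (ax - location[0]) ** 2 + (ay - location[1]) ** 2
    area_id = min(areas, key=dist2)
    return area_id, areas[area_id][1]

# A vehicle localized near (11, 21) resolves to area "510", oriented 90 deg.
assert nearest_area((11.0, 21.0), AREAS) == ("510", 90.0)
```

A deployment would also bound the match distance, so that a vehicle far from every registered area is reported as outside any parking area rather than snapped to the nearest one.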
When the difference angle between the parking orientation of the target vehicle and the position orientation of the parking area is within a set threshold range, the processing device 112 may determine that the parking of the target vehicle complies with the specification. When the difference angle is not within the set threshold range, the processing device 112 may determine that the parking of the target vehicle is out of specification. The difference angle is the angle between the parking orientation of the target vehicle and the position orientation of the parking area. The set threshold range may be an angle range set according to a parking specification, or an angle range set manually. As shown in fig. 5, if the direction of arrow a is the preset position orientation of the parking area, the difference angle may be the angle between the direction of arrow a and line 501. Accordingly, the set threshold range may be 60-120°. Preferably, the set threshold range may be 75-105°. Preferably, the set threshold range may be 80-100°. Preferably, the set threshold range may be 85-95°. Preferably, the set threshold angle may be 90°. For example, the processing device 112 may set the threshold range to 80-100°. When the parking orientation of the target vehicle differs from the position orientation of the parking area by an angle of 85° (i.e., an angle within the set threshold range), the processing device 112 may determine that the parking of the target vehicle complies with the specification. When the parking orientation of the target vehicle differs from the position orientation of the parking area by 120° (i.e., an angle not within the set threshold range), the processing device 112 may determine that the parking of the target vehicle is out of specification.
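The comparison can be sketched as follows. Orientations are assumed to be given in degrees (the particular orientation values are hypothetical), and the 80-100° range from the example above is used as the set threshold:

```python
def difference_angle(parking_deg, area_deg):
    """Unsigned angle between the parking orientation and the parking
    area's position orientation, folded into [0, 180] degrees."""
    d = abs(parking_deg - area_deg) % 360.0
    return 360.0 - d if d > 180.0 else d

def parking_compliant(parking_deg, area_deg, low=80.0, high=100.0):
    """True when the difference angle falls inside the set threshold
    range, here the 80-100 degree range used in the example."""
    return low <= difference_angle(parking_deg, area_deg) <= high

# The examples above: a difference angle of 85 degrees is within 80-100,
# so parking complies; 120 degrees is not, so parking is out of spec.
assert parking_compliant(85.0, 0.0)
assert not parking_compliant(120.0, 0.0)
```

Folding the raw difference into [0, 180] keeps the check correct when the two orientations straddle the 0°/360° wrap-around.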
In some embodiments, when it is determined that the parking of the target vehicle does not comply with the specification, the processing device 112 may transmit a lock-prohibition instruction to the target vehicle, control the target vehicle to broadcast a first prompt tone, or transmit first prompt information to a user terminal (e.g., the terminal device 130) corresponding to the target vehicle. The first prompt tone may be used to alert the user that the parking of the target vehicle does not comply with the specification. For example, the first prompt tone may include a voice prompt (e.g., "Vehicle parking out of specification, please park again") or a ring tone prompt. The first prompt information may inform the user that the parking of the target vehicle does not comply with the specification, as well as the reason and how to correct it. In some embodiments, the first prompt information may reach the user in the form of a short message, an email, an in-application notification, or the like. For example, the processing device 112 may send the first prompt information to the terminal device 130 as a short message, the content of which may be "Hello, your vehicle is not parked in compliance with the specification; please park again". For another example, the processing device 112 may send the first prompt information to the terminal device 130 as an in-application notification, the content of which may be "Hello, your vehicle is parked at an angle; please park again".
It should be noted that the above description of method 300 is for purposes of example and illustration only and is not intended to limit the scope of the present application. Various modifications and alterations to method 300 will be apparent to those skilled in the art in light of the present application; such modifications and alterations remain within the scope of the present application. For example, other steps, such as storing and/or updating steps, may be added; step 320, step 330, and step 340 may be combined; or the processing device 112 may store information and/or data (the image recognition model, the image sequence, the movement trajectory, the position orientation, etc.) in a memory (e.g., the storage device 140) associated with the canonical parking system 100.
FIG. 4 is an exemplary flow chart illustrating the determination of whether a target vehicle is within a parking area according to some embodiments of the present application. In some embodiments, one or more steps of the method 400 of determining whether a target vehicle is within a parking area may be implemented in the canonical parking system 100 shown in fig. 1. For example, one or more steps of method 400 may be stored as instructions in storage device 140 and invoked and/or executed by processing device 112. In some embodiments, the method 400 of determining whether a target vehicle is within a parking area may be implemented by a processor on the target vehicle.
At step 410, the processing device 112 (e.g., the acquisition module 210) may acquire a second image of the target vehicle after parking.
The second image is an image of the target vehicle acquired after parking. In some embodiments, the second image may be acquired by one or more image acquisition devices installed on the target vehicle after the target vehicle is parked and transmitted to one or more components of the canonical parking system 100 (e.g., server 110, network 120, terminal device 130, storage device 140, vehicle 150) via a wireless connection (e.g., WiFi, Bluetooth, etc.). For more description of the image acquisition device and image acquisition, reference may be made to step 310 in FIG. 3.
At step 420, the processing device 112 (e.g., the determination module 220) may determine whether the target vehicle is within the parking area based on the second image.
In some embodiments, the processing device 112 may determine, based on the second image, whether there are parking frame lines around the target vehicle. The parking frame lines may be used to delimit a parking area. For example, referring to fig. 5, a parking frame line may be a thick white line. For another example, a parking frame line may be a special sign (e.g., a parking plate, etc.) that has been laid. In some embodiments, the processing device 112 may make this determination using a machine learning model and/or an image recognition algorithm. For example, the processing device 112 may determine whether there are parking frame lines around the target vehicle based on the second image using a trained machine learning model. The trained machine learning model may include a convolutional neural network (CNN) model, a recurrent neural network (RNN) model, a long short-term memory (LSTM) network model, a generative adversarial network (GAN) model, an artificial neural network (ANN) model, and the like. The processing device 112 may train an initial machine learning model on a sample parking area, at least one set of sample images, and corresponding parking labels to obtain a final model (i.e., the trained machine learning model) that satisfies the requirements. The processing device 112 may input the second image into the trained machine learning model, and the trained machine learning model may output a determination result of whether there are parking frame lines around the target vehicle. The determination result may be the number of parking frame lines, for example, 0, 1, 2, 3, 4, etc. For another example, the processing device 112 may determine whether there are parking frame lines around the target vehicle based on the second image using an image recognition algorithm.
The image recognition algorithm may be any algorithm capable of detecting lines, including but not limited to the Hough line detection algorithm, the LSD line detection algorithm, the FLD line detection algorithm, the EDLines line detection algorithm, the LSWMS line detection algorithm, the CannyLines line detection algorithm, the MCMLSD line detection algorithm, the LSM line detection algorithm, and the like. Specifically, the processing device 112 may perform operations such as grayscale conversion, Gaussian filtering, and edge detection on the second image. By setting a region of interest (e.g., a lower region of the second image) and acquiring edge information of the region of interest, the parking frame lines of the parking area can be obtained. The processing device 112 may then perform a Hough transform on the second image. The Hough transform connects the individual pixel points of the parking frame lines obtained by the preceding operations into lines; that is, the Hough transform finds straight lines in the second image from the pixel points. Finally, the processing device 112 may fit the discontinuous straight lines obtained by the Hough transform (e.g., average the slopes and intercepts of segments on the same side of an image and draw a complete straight line using the averaged parameters).
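A minimal, self-contained sketch of the Hough voting step described above (a real implementation would typically use an optimized library routine; the synthetic edge pixels below stand in for the output of the grayscale conversion, Gaussian filtering, and edge detection just described):

```python
import math
from collections import defaultdict

def hough_lines(edge_points, theta_steps=180, threshold=8):
    """Minimal Hough transform: every edge pixel votes for all (theta, rho)
    line parameterizations passing through it; bins whose vote counts clear
    the threshold are reported as detected lines."""
    acc = defaultdict(int)
    for x, y in edge_points:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(t, rho)] += 1
    return [(math.pi * t / theta_steps, rho)
            for (t, rho), votes in acc.items() if votes >= threshold]

# Edge pixels of a vertical parking frame line at x = 4, as isolated
# pixel points; the transform connects them into a line.
edge = [(4, y) for y in range(10)]
detected = hough_lines(edge)
# The vertical line appears as the parameter pair theta = 0, rho = 4.
assert any(abs(theta) < 1e-6 and rho == 4 for theta, rho in detected)
```

Nearby parameter bins also collect votes for the same physical line, which is why the final fitting step in the text averages the parameters of segments on the same side before drawing one complete line.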
In some embodiments, the processing device 112 may determine that the target vehicle is located within the parking area when there are at least two parking frame lines around the target vehicle. For example, when there are 2 or more parking frame lines around the target vehicle, the processing device 112 may determine that the target vehicle is located within the parking area. For another example, when there are 0 or 1 parking frame lines around the target vehicle, the processing device 112 may determine that the target vehicle is not located within the parking area.
In some embodiments, when it is determined that the target vehicle is not within the parking area, the processing device 112 may transmit a lock-prohibition instruction to the target vehicle, control the target vehicle to broadcast a second prompt tone, or transmit second prompt information to a user terminal (e.g., the terminal device 130) corresponding to the target vehicle. The second prompt tone may be used to alert the user that the target vehicle is not within the parking area. For example, the second prompt tone may include a voice prompt (e.g., "Vehicle is not in the parking area, please park again") or a ring tone prompt. The second prompt information may inform the user that the target vehicle is not in the parking area and how to correct this. In some embodiments, the second prompt information may reach the user in the form of a short message, an email, an in-application notification, or the like. For example, the processing device 112 may send the second prompt information to the terminal device 130 as a short message, the content of which may be "Hello, your vehicle is not in the parking area; please park again". For another example, the processing device 112 may send the second prompt information to the terminal device 130 as an in-application notification, the content of which may be "Hello, your vehicle is not in the parking area; please park again".
It should be noted that the above description of method 400 is for purposes of example and illustration only and is not intended to limit the scope of applicability of the present application. Various modifications and alterations to method 400 will be apparent to those skilled in the art in light of the present application. However, such modifications and variations are intended to be within the scope of the present application. For example, processing device 112 may store information and/or data (trained machine learning models, second images, parking areas, etc.) in a memory (e.g., storage device 140) associated with canonical parking system 100.
The beneficial effects that may be brought by the embodiments of the present application include, but are not limited to: (1) the parking orientation of the target vehicle can be accurately determined based on the image sequence, so that whether the parking of the target vehicle complies with the specification can be effectively judged; (2) whether the target vehicle is in the parking area can be effectively determined from the second image taken after the target vehicle is parked; (3) by reminding the user when the parking does not comply with the specification or the vehicle is not in the parking area, the parking behavior of users can be effectively standardized and the vehicle service efficiency improved. It should be noted that different embodiments may produce different advantages; in different embodiments, any one or combination of the above advantages, or any other advantage, may be obtained.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be considered merely illustrative and not restrictive of the broad application. Various modifications, improvements and adaptations to the present application may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present application and thus fall within the spirit and scope of the exemplary embodiments of the present application.
Also, this application uses specific language to describe embodiments of the application. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the present application is included in at least one embodiment of the present application. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of one or more embodiments of the present application may be combined as appropriate.
Moreover, those skilled in the art will appreciate that aspects of the present application may be illustrated and described in terms of several patentable species or situations, including any new and useful combination of processes, machines, manufacture, or materials, or any new and useful improvement thereof. Accordingly, various aspects of the present application may be embodied entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of the present application may be embodied as a computer product, including computer readable program code, embodied in one or more computer readable media.
The computer storage medium may comprise a propagated data signal with the computer program code embodied therewith, for example, on baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, etc., or any suitable combination. A computer storage medium may be any computer-readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated over any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or any combination of the preceding.
Computer program code required for the operation of various portions of the present application may be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, and Python; conventional procedural programming languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP; dynamic programming languages such as Python, Ruby, and Groovy; or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or processing device. In the latter scenario, the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet), in a cloud computing environment, or as a service such as software as a service (SaaS).
Additionally, the order in which elements and sequences of the processes described herein are processed, the use of alphanumeric characters, or the use of other designations, is not intended to limit the order of the processes and methods described herein, unless explicitly claimed. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing processing device or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the application, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not to be interpreted as requiring more features than are expressly recited in the claims. Indeed, claimed subject matter may lie in less than all of the features of a single embodiment disclosed above.
Numerals describing quantities of components, attributes, etc. are used in some embodiments; it should be understood that such numerals used in the description of the embodiments are, in some instances, qualified by the modifier "about," "approximately," or "substantially." Unless otherwise indicated, "about," "approximately," or "substantially" indicates that the number allows a variation of ±20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending upon the desired properties of the individual embodiments. In some embodiments, the numerical parameters should take into account the specified significant digits and employ a general digit-preserving approach. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the application are approximations, in the specific examples such numerical values are set forth as precisely as possible.
The entire contents of each patent, patent application, patent application publication, and other material cited in this application, such as articles, books, specifications, publications, documents, and the like, are hereby incorporated by reference into this application, except for any such content that is inconsistent with or conflicts with the present disclosure, and except for any such content that may limit the broadest scope of the claims (whether now present or later appended to this application). It is noted that if the description, definition, and/or use of a term in material accompanying this application is inconsistent with or contrary to what is stated in this application, the description, definition, and/or use of the term in this application shall control.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present application. Other variations are also possible within the scope of the present application. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the present application can be viewed as being consistent with the teachings of the present application. Accordingly, the embodiments of the present application are not limited to only those embodiments explicitly described and depicted herein.

Claims (10)

1. A standard parking method, comprising:
acquiring an image sequence of a target vehicle before parking;
determining a moving track of the target vehicle before parking based on the image sequence;
determining a parking orientation of the target vehicle based on a moving track of the target vehicle before parking; and
determining whether parking of the target vehicle is in compliance with a specification based on the parking orientation of the target vehicle.
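For illustration only, the orientation-from-track step of claim 1 could be sketched as follows; the (x, y) track representation and the use of the final displacement vector are hypothetical choices, not part of the claimed method:

```python
import math

def parking_orientation(track):
    """Estimate a heading in degrees [0, 360) from the last two points
    of a pre-parking movement track given as (x, y) positions."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    # The angle of the final displacement vector approximates the
    # direction the vehicle was facing when it stopped.
    return math.degrees(math.atan2(y1 - y0, x1 - x0)) % 360.0

# A track ending with a northeast-pointing displacement:
print(parking_orientation([(0.0, 0.0), (1.0, 0.0), (2.0, 1.0)]))  # → 45.0
```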
2. The method of claim 1, wherein the determining whether the parking of the target vehicle complies with the specification based on the parking orientation of the target vehicle comprises:
comparing the parking orientation of the target vehicle with the orientation of the parking area, and determining that the parking of the target vehicle complies with the specification when the angular difference between the parking orientation of the target vehicle and the orientation of the parking area is within a set threshold range.
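A minimal sketch of the angle comparison in claim 2, assuming headings in degrees; the 15° threshold is an illustrative placeholder, since the claim leaves the threshold as a set value:

```python
def complies_with_orientation(vehicle_deg, area_deg, threshold_deg=15.0):
    """Return True when the angular difference between the vehicle's
    parking orientation and the parking area's orientation is within
    the threshold.  The difference is wrapped into [0, 180] so that
    355 degrees and 10 degrees count as 15 degrees apart, not 345."""
    diff = abs(vehicle_deg - area_deg) % 360.0
    diff = min(diff, 360.0 - diff)
    return diff <= threshold_deg

print(complies_with_orientation(355.0, 10.0))  # → True  (15° apart)
print(complies_with_orientation(90.0, 150.0))  # → False (60° apart)
```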
3. The method of claim 2, further comprising:
acquiring, from the image sequence, at least one first image containing parking area information; and
identifying the orientation of the parking area from the at least one first image using an image recognition model.
4. The method of claim 1, further comprising: when it is determined that the parking of the target vehicle does not comply with the specification, performing at least one of:
sending a lock-prohibition instruction to the target vehicle;
controlling the target vehicle to broadcast a first prompt tone; or
sending first prompt information to a user terminal corresponding to the target vehicle.
5. The method of claim 1, further comprising:
acquiring a second image of the target vehicle after parking; and
determining whether the target vehicle is within a parking area based on the second image.
6. The method of claim 5, wherein the determining whether the target vehicle is within a parking area based on the second image comprises:
determining, based on the second image, whether there are parking frame lines around the target vehicle; and
determining that the target vehicle is within the parking area when at least two parking frame lines are present around the target vehicle.
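The frame-line rule of claim 6 reduces to counting detected parking frame lines near the vehicle. A sketch, in which the detection records and distance field are hypothetical stand-ins for the output of an image recognition step:

```python
def inside_parking_area(detections, max_distance_m=1.0):
    """detections: (label, distance_to_vehicle_in_meters) pairs from a
    hypothetical detector run on the post-parking image.  The vehicle
    is treated as inside the parking area when at least two parking
    frame lines lie within max_distance_m of it."""
    nearby_lines = [d for label, d in detections
                    if label == "parking_frame_line" and d <= max_distance_m]
    return len(nearby_lines) >= 2

print(inside_parking_area([("parking_frame_line", 0.3),
                           ("curb", 0.2),
                           ("parking_frame_line", 0.8)]))  # → True
print(inside_parking_area([("parking_frame_line", 0.3)]))  # → False
```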
7. The method of claim 5, further comprising: when the target vehicle is not within the parking area, performing at least one of:
sending a lock-prohibition instruction to the target vehicle;
controlling the target vehicle to broadcast a second prompt tone; or
sending second prompt information to the user terminal corresponding to the target vehicle.
8. A standard parking system, comprising an acquisition module, a movement track determination module, a parking orientation determination module, and a parking specification determination module, wherein:
the acquisition module is configured to acquire a first image sequence of a target vehicle before parking;
the movement track determination module is configured to determine a movement track of the target vehicle before parking based on the first image sequence;
the parking orientation determination module is configured to determine a parking orientation of the target vehicle based on the movement track of the target vehicle before parking; and
the parking specification determination module is configured to determine whether the parking of the target vehicle complies with a parking specification based on the parking orientation of the target vehicle.
9. A standard parking device, comprising at least one processor and at least one memory;
the at least one memory is configured to store computer instructions; and
the at least one processor is configured to execute at least a portion of the computer instructions to implement the standard parking method of any one of claims 1-7.
10. A computer-readable storage medium, wherein the storage medium stores computer instructions that, when executed by a processor, implement the standard parking method of any one of claims 1-7.
CN202110386027.9A 2021-04-09 2021-04-09 Standard parking method, system, device and storage medium Pending CN113076896A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110386027.9A CN113076896A (en) 2021-04-09 2021-04-09 Standard parking method, system, device and storage medium


Publications (1)

Publication Number Publication Date
CN113076896A true CN113076896A (en) 2021-07-06

Family

ID=76617288

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110386027.9A Pending CN113076896A (en) 2021-04-09 2021-04-09 Standard parking method, system, device and storage medium

Country Status (1)

Country Link
CN (1) CN113076896A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001006097A (en) * 1999-06-25 2001-01-12 Fujitsu Ten Ltd Device for supporting driving for vehicle
CN206331594U (en) * 2017-01-03 2017-07-14 上海量明科技发展有限公司 The decision maker and system of shared vehicle parking position
CN109637154A (en) * 2019-01-25 2019-04-16 合肥市智信汽车科技有限公司 A kind of on-vehicle vehicle is separated to stop automatic identification and grasp shoot device
CN110706509A (en) * 2019-10-12 2020-01-17 东软睿驰汽车技术(沈阳)有限公司 Parking space and direction angle detection method, device, equipment and medium thereof
CN110901632A (en) * 2019-11-29 2020-03-24 长城汽车股份有限公司 Automatic parking control method and device
CN111222385A (en) * 2018-11-27 2020-06-02 千寻位置网络有限公司 Method and device for detecting parking violation of bicycle, shared bicycle and detection system
CN111862670A (en) * 2020-05-21 2020-10-30 北京骑胜科技有限公司 Vehicle parking method and device, electronic equipment and storage medium


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114333390A (en) * 2021-12-29 2022-04-12 北京精英路通科技有限公司 Method, device and system for detecting shared vehicle parking event
CN114333390B (en) * 2021-12-29 2023-08-08 北京精英路通科技有限公司 Method, device and system for detecting shared vehicle parking event
CN114613073A (en) * 2022-04-18 2022-06-10 宁波小遛共享信息科技有限公司 Vehicle returning control method and device for shared vehicle and electronic equipment
CN114613073B (en) * 2022-04-18 2023-09-08 浙江小遛信息科技有限公司 Vehicle returning control method and device for shared vehicle and electronic equipment

Similar Documents

Publication Publication Date Title
CN111937050B (en) Passenger related item loss reduction
US8929604B2 (en) Vision system and method of analyzing an image
CN109510851A (en) The construction method and equipment of map datum
US11573084B2 (en) Method and system for heading determination
CN111256693B (en) Pose change calculation method and vehicle-mounted terminal
CN113076896A (en) Standard parking method, system, device and storage medium
US11308324B2 (en) Object detecting system for detecting object by using hierarchical pyramid and object detecting method thereof
CN117576652B (en) Road object identification method and device, storage medium and electronic equipment
CN112654998B (en) Lane line detection method and device
US11377125B2 (en) Vehicle rideshare localization and passenger identification for autonomous vehicles
CN113099385B (en) Parking monitoring method, system and equipment
US11663807B2 (en) Systems and methods for image based perception
CN115342811A (en) Path planning method, device, equipment and storage medium
CN109711363B (en) Vehicle positioning method, device, equipment and storage medium
WO2021051230A1 (en) Systems and methods for object detection
CN217787820U (en) Vehicle control apparatus
US20230043716A1 (en) Systems and Methods for Image Based Perception
EP4131174A1 (en) Systems and methods for image based perception
US20230084623A1 (en) Attentional sampling for long range detection in autonomous vehicles
US20230075425A1 (en) Systems and methods for training and using machine learning models and algorithms
WO2024109079A1 (en) Parking space opening detection method and device
US20230252638A1 (en) Systems and methods for panoptic segmentation of images for autonomous driving
WO2020073268A1 (en) Snapshot image to train roadmodel
WO2020073270A1 (en) Snapshot image of traffic scenario
CN115527021A (en) User positioning method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination