CN115565158A - Parking space detection method and device, electronic equipment and computer readable medium - Google Patents

Parking space detection method and device, electronic equipment and computer readable medium

Info

Publication number
CN115565158A
CN115565158A (application CN202211437221.6A; granted as CN115565158B)
Authority
CN
China
Prior art keywords: parking space, parking, initial, sample, lot image
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211437221.6A
Other languages
Chinese (zh)
Other versions
CN115565158B (en)
Inventor
齐新迎
李敏
龙文
张洋
罗鸿
蔡仲辉
陶武康
刘智睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GAC Aion New Energy Automobile Co Ltd
Original Assignee
GAC Aion New Energy Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by GAC Aion New Energy Automobile Co Ltd
Priority to CN202211437221.6A
Publication of CN115565158A
Application granted
Publication of CN115565158B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/586Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of parking space
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/06Automatic manoeuvring for parking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/16Image acquisition using multiple overlapping images; Image stitching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The embodiment of the disclosure discloses a parking space detection method, a parking space detection device, electronic equipment and a computer readable medium. One embodiment of the method comprises: acquiring an initial parking lot image set; merging each initial parking lot image of the initial parking lot image set to obtain a merged parking lot image; adjusting the combined parking lot image to obtain a target parking lot image; inputting the target parking lot image into a pre-trained neural network detection model to obtain parking space detection information; comparing the parking space detection information with each piece of parking space information in a preset parking space information base to obtain a comparison result; and sending the parking space detection information and the comparison result to a control terminal to control the current vehicle to park. This embodiment can improve the accuracy of automatic parking of the vehicle.

Description

Parking space detection method and device, electronic equipment and computer readable medium
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to a parking space detection method, a parking space detection device, parking space detection equipment and a computer readable medium.
Background
Parking space detection can be used in automatic parking technology, where it helps the control terminal perform automatic parking according to detected parking space information (such as parking space occupation identifiers). At present, the commonly adopted approach to parking space detection is: using a relevant target detection model, extracting parking space detection information from an image, and sending the parking space detection information to the control terminal for parking.
However, the inventors found that when parking space information is detected in the above manner, the following technical problems often exist:
First, existing visual recognition models have difficulty recognizing incomplete parking spaces in images, which reduces the accuracy of parking space detection and, in turn, the accuracy of automatic parking of the vehicle;
Second, when a parking space line is damaged, the accuracy of the parking space information detected by existing visual recognition models decreases, which reduces the accuracy of parking space detection and thus of parking;
Third, parking spaces at image junctions are deformed and distorted, which reduces the accuracy of parking space detection and of automatic parking of the vehicle.
The information disclosed in this background section is only for enhancement of understanding of the background of the inventive concept and, therefore, may contain information that does not form prior art already known to a person of ordinary skill in the art in this country.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure provide a parking space detection method, apparatus, electronic device and computer readable medium to solve one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a parking space detection method, including: acquiring an initial parking lot image set; merging each initial parking lot image of the initial parking lot image set to obtain a merged parking lot image; adjusting the merged parking lot image to obtain a target parking lot image; inputting the target parking lot image into a pre-trained neural network detection model to obtain parking space detection information; comparing the parking space detection information with each piece of parking space information in a preset parking space information base to obtain a comparison result; and sending the parking space detection information and the comparison result to a control terminal to control the current vehicle to park.
In a second aspect, some embodiments of the present disclosure provide a parking space detection device, including: an acquisition unit configured to acquire an initial parking lot image set; a merging unit configured to merge the initial parking lot images of the initial parking lot image set to obtain a merged parking lot image; an adjusting unit configured to adjust the merged parking lot image to obtain a target parking lot image; the input unit is configured to input the target parking lot image into a pre-trained neural network detection model to obtain parking space detection information; the comparison unit is configured to compare the parking space detection information with each piece of parking space information in a preset parking space information base to obtain a comparison result; and the sending unit is configured to send the parking space detection information and the comparison result to a control terminal so as to control the current vehicle to park.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device, on which one or more programs are stored, which when executed by one or more processors cause the one or more processors to implement the method described in any implementation of the first aspect.
In a fourth aspect, some embodiments of the disclosure provide a computer-readable medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method described in any implementation manner of the first aspect.
The above embodiments of the present disclosure have the following advantages: by the parking space detection method of some embodiments of the present disclosure, the accuracy of automatic parking of a vehicle can be improved. Specifically, the reason the accuracy of automatic parking is reduced is that existing visual recognition algorithms have difficulty recognizing incomplete parking spaces in images, which reduces the accuracy of parking space detection. Based on this, the parking space detection method of some embodiments of the present disclosure first obtains an initial parking lot image set. Thus, a set of parking lot surround-view images photographed by the vehicle-mounted cameras can be acquired. Secondly, the initial parking lot images of the initial parking lot image set are merged to obtain a merged parking lot image. Thereby, a more accurate parking lot image can be obtained. Then, the merged parking lot image is adjusted to obtain a target parking lot image, which further improves the accuracy of the obtained parking lot image. Next, the target parking lot image is input into a pre-trained neural network detection model to obtain parking space detection information. In this way, the pre-trained neural network detection model can identify incomplete parking spaces in the parking lot image, and accurate parking space detection information can be obtained. Then, the parking space detection information is compared with each piece of parking space information in a preset parking space information base to obtain a comparison result. Thus, accurate parking space type information matching the parking space detection information can be obtained. Finally, the parking space detection information and the comparison result are sent to a control terminal to control the current vehicle to park.
In this way, the control terminal can issue corresponding control instructions according to the accurate parking space detection information and parking space type information to control the vehicle to park. Therefore, with the parking space detection method, an accurate parking lot image can be input into the neural network detection model, incomplete parking spaces in the parking lot image can be identified, the accuracy of parking space detection can be improved, and thus the accuracy of automatic parking of the vehicle can be improved.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements are not necessarily drawn to scale.
FIG. 1 is a flow chart of some embodiments of a stall detection method according to the present disclosure;
FIG. 2 is a schematic structural diagram of some embodiments of a parking space detection device according to the present disclosure;
FIG. 3 is a schematic block diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a" or "an" in this disclosure are illustrative rather than limiting, and those skilled in the art will appreciate that they should be read as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates a flow 100 of some embodiments of a stall detection method according to the present disclosure. The parking space detection method comprises the following steps:
step 101, an initial parking lot image set is obtained.
In some embodiments, the subject performing the parking space detection method may acquire the initial parking lot image set from each vehicle-mounted camera of the target vehicle by means of a wired or wireless connection. The target vehicle may be a vehicle that is parking. The initial parking lot images in the initial parking lot image set may be fisheye images. Each vehicle-mounted camera in the set of vehicle-mounted cameras may correspond to one initial parking lot image in the initial parking lot image set.
As an example, the vehicle-mounted cameras may be four external fisheye cameras mounted on the front bumper, the trunk, and the left and right rear-view mirrors of the target vehicle.
And step 102, merging each initial parking lot image of the initial parking lot image set to obtain a merged parking lot image.
In some embodiments, the execution subject may combine the initial parking lot images of the initial parking lot image set to obtain a combined parking lot image.
In practice, the merging, by the executing body, each initial parking lot image of the initial parking lot image set to obtain a merged parking lot image may include the following steps:
first, camera parameter information is acquired. Wherein the camera parameter information may be acquired from the onboard camera.
As an example, the above-mentioned camera parameter information may include, but is not limited to, a camera focal length value.
And secondly, optimizing each initial parking lot image of the initial parking lot image set based on the camera parameter information to obtain a first parking lot image set. Wherein each initial parking lot image of the initial parking lot image set may be optimized by minimizing a reprojection error.
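The second step optimizes each image by minimizing a reprojection error. As a hedged illustration only (the patent does not specify the camera model or optimizer), the sketch below searches for the focal length that minimizes squared reprojection error for known 2-D/3-D correspondences under a toy pinhole model; all names and numbers are illustrative.

```python
def reprojection_error(f, points_3d, points_2d):
    """Mean squared error between projected 3-D points and 2-D
    observations, for a pinhole camera with focal length f whose
    optical axis is the z axis (a deliberate simplification)."""
    err = 0.0
    for (x, y, z), (u, v) in zip(points_3d, points_2d):
        pu, pv = f * x / z, f * y / z   # pinhole projection
        err += (pu - u) ** 2 + (pv - v) ** 2
    return err / len(points_3d)

# Toy correspondences generated with f = 100, so the search recovers it.
pts3d = [(1.0, 2.0, 4.0), (2.0, 1.0, 2.0), (0.5, 0.5, 1.0)]
pts2d = [(25.0, 50.0), (100.0, 50.0), (50.0, 50.0)]
best_f = min(range(50, 201), key=lambda f: reprojection_error(f, pts3d, pts2d))
```

In practice a full bundle-adjustment-style optimizer would refine many intrinsic and extrinsic parameters jointly; the grid search here only conveys the objective being minimized.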
And thirdly, performing coordinate conversion on each first parking lot image of the first parking lot image set based on a preset camera distortion coordinate comparison table to obtain a second parking lot image set. The preset camera distortion coordinate comparison table may be obtained from the vehicle-mounted camera. The coordinate conversion may be performed on each first parking lot image of the first parking lot image set by looking up each second pixel coordinate included in the second parking lot image corresponding to each first pixel coordinate included in the first parking lot image from the preset camera distortion coordinate comparison table, and converting each first pixel coordinate into each second pixel coordinate to obtain the second parking lot image.
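The third step's coordinate conversion via a precomputed distortion lookup table can be sketched as follows. This is a hypothetical minimal version: the table maps each first-image (distorted) pixel coordinate to its second-image (undistorted) coordinate, and the image is rebuilt by copying pixels accordingly.

```python
def convert_with_lut(image, lut, height, width):
    """Build the second (undistorted) image by moving each source pixel
    to the destination coordinate given by the lookup table.
    lut: {(x1, y1): (x2, y2)} mapping first-image to second-image pixels."""
    out = [[0] * width for _ in range(height)]
    for (x1, y1), (x2, y2) in lut.items():
        out[y2][x2] = image[y1][x1]
    return out

# Toy 2x2 image and a table that mirrors it horizontally.
img = [[1, 2],
       [3, 4]]
lut = {(0, 0): (1, 0), (1, 0): (0, 0),
       (0, 1): (1, 1), (1, 1): (0, 1)}
result = convert_with_lut(img, lut, 2, 2)
# result == [[2, 1], [4, 3]]
```

A production system would typically store the table as two dense remap grids and interpolate (as OpenCV's `remap` does), rather than iterate over a dictionary.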
And fourthly, fusing all the second parking lot images in the second parking lot image set to obtain a fused parking lot image.
In practice, the above-mentioned fusing each second parking lot image in the second parking lot image set to obtain a fused parking lot image may include the following substeps:
the first substep, a set of camera position information and an initial look-around parking lot image are acquired. Wherein the camera position information set and the initial all-round parking lot image may be acquired from the on-vehicle camera. The camera position information in the set of camera position information may be indicative of a position of the onboard camera relative to the target vehicle.
And a second substep of fusing the initial all-round parking lot image and the second parking lot image set based on the camera position information set to obtain the fused parking lot image. The initial all-round parking lot image and the second parking lot image set may be fused in a joint calibration manner.
And fifthly, projecting the fused parking lot image based on the camera parameter information to obtain the merged parking lot image. The fused parking lot image may include a set of pixel point coordinates. The coordinates of each pixel point in the set may be projected from a two-dimensional coordinate system into a three-dimensional coordinate system by means of three-dimensional texture mapping, to obtain the merged parking lot image. Here, the pixel point coordinates may be two-dimensional coordinates. The two-dimensional coordinate system may be a pixel coordinate system. The three-dimensional coordinate system may be a texture coordinate system.
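The fifth step maps pixel coordinates into a texture coordinate system. The patent does not give the exact mapping; as a hedged sketch, the conventional normalization of pixel coordinates to the [0, 1] texture range is shown below (an assumption, not the patent's formula).

```python
def pixel_to_texture(px, py, width, height):
    """Normalize an integer pixel coordinate to (u, v) texture
    coordinates in [0, 1], mapping the last pixel to exactly 1.0."""
    return px / (width - 1), py / (height - 1)

# A pixel in a 128 x 384 image (width 128, height 384 here for the demo).
u, v = pixel_to_texture(64, 192, 128, 384)
```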
In practice, the optional technical content in step 102 serves as an inventive point of the embodiments of the present disclosure and addresses the third technical problem mentioned in the background, namely the reduction in parking accuracy. The factors that degrade the accuracy of the parking function tend to be as follows: parking spaces at image junctions are deformed and distorted, which reduces the accuracy of parking space detection and hence of automatic parking. If these factors are addressed, the accuracy of parking can be improved. To achieve this effect, the initial parking lot images acquired from the vehicle-mounted cameras can be merged, so that distorted parking spaces are stitched and de-distorted to obtain a complete parking lot image. This improves the accuracy of the parking lot image input to the neural network detection model, which in turn improves the accuracy of the parking space detection information and of the autonomous parking performed by the target vehicle under the control of its control terminal.
And 103, adjusting the combined parking lot image to obtain a target parking lot image.
In some embodiments, the execution subject may adjust the merged parking lot image to obtain a target parking lot image. The merged parking lot image may be adjusted by an optical flow method to obtain a target parking lot image.
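The patent names an optical flow method for this adjustment but gives no further detail. As a much-simplified, hypothetical stand-in, the sketch below estimates a global one-dimensional shift between two rows of pixels by exhaustive search over candidate displacements (the crudest form of flow estimation), which could then be compensated.

```python
def estimate_shift(prev_row, cur_row, max_shift=3):
    """Return the integer displacement s of cur_row relative to
    prev_row that minimizes mean squared difference on the overlap."""
    best, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(prev_row[i], cur_row[i + s])
                 for i in range(len(prev_row))
                 if 0 <= i + s < len(cur_row)]
        err = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
        if err < best_err:
            best, best_err = s, err
    return best

prev = [0, 0, 5, 9, 5, 0, 0, 0]
cur  = [0, 0, 0, 5, 9, 5, 0, 0]   # the same intensity pattern, shifted
shift = estimate_shift(prev, cur)
```

A real implementation would use a dense 2-D flow algorithm (e.g. a Farneback- or Lucas-Kanade-style method) rather than this toy search.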
And 104, inputting the target parking lot image into a pre-trained neural network detection model to obtain parking space detection information.
In some embodiments, the execution subject may input the target parking lot image to a pre-trained neural network detection model to obtain parking space detection information. The pre-trained neural network detection model can be a pre-trained neural network model which takes the target parking lot image as input and takes the parking space detection information as output.
As an example, the above pre-trained neural network detection model may be an object detection model. The size of the input target parking lot image may be 128 × 384 pixels, where 128 and 384 are the length and width of the target parking lot image, respectively. The output parking space detection information may include a parking space detection image. Here, the size of the parking space detection image may be 4 × 12 cells, where 4 and 12 are the length and width of the parking space detection image, respectively.
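Assuming the sizes above are in pixels and grid cells, a 128 × 384 input and a 4 × 12 output imply a uniform downsampling factor of 32 in each dimension. The hypothetical helper below shows how an input pixel would map to its detection-grid cell under that assumption.

```python
# Input/output geometry implied by the example sizes (an assumption;
# the patent only states the dimensions, not the stride).
in_h, in_w = 128, 384
out_h, out_w = 4, 12
stride_h, stride_w = in_h // out_h, in_w // out_w   # 32 and 32

def cell_for_pixel(px, py):
    """Map an input-image pixel (px, py) to its detection-grid
    cell (column, row)."""
    return px // stride_w, py // stride_h

cell = cell_for_pixel(200, 100)   # pixel (200, 100) -> grid cell (6, 3)
```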
Alternatively, the pre-trained neural network detection model may be obtained by training through the following steps:
in a first step, a sample parking lot image set is obtained. The sample parking lot image set can be obtained from a preset image data set.
As an example, the preset image data set described above may be, but is not limited to, a peer parking data set or a parking space detection data set (PIL_PARK).
Secondly, based on the sample parking lot image set, executing the following training steps:
the first substep, choose the image of the parking lot of the sample from the above-mentioned image set of parking lot of the sample. In practice, training samples may be randomly selected from the set of training samples described above.
And in the second substep, marking the sample parking lot image to obtain sample parking space detection information. The sample parking lot image can be marked in a geometric abstract mode, and sample parking space detection information is obtained.
And a third substep, inputting the sample parking lot image into the initial neural network detection model to obtain initial parking space detection information.
By way of example, the initial neural network Detection model may include, but is not limited to, a convolutional neural network, a backbone (backbone) neural network, and a Detection head (Detection head) neural network.
And a fourth substep of determining a parking space detection difference value between the sample parking space detection information and the initial parking space detection information based on a preset loss function. The predetermined loss function may be an LSE (least square error) function. The initial parking space detection information comprises an initial parking space confidence value, an initial parking space angular point coordinate value set, an initial parking space entrance line length value, an initial parking space separation line length value and an initial parking space occupation identification value. The initial parking space confidence value can represent the authenticity of the parking space detection information. Each initial parking space angular point coordinate value in the initial parking space angular point coordinate value set can represent the coordinate of each angle of the parking space. The length value of the initial parking space entrance line can represent the length of the parking space entrance line. Here, the parking space entrance line may be a parking space line through which the vehicle passes when entering the parking space. The length value of the initial parking space separation line can represent the length of the parking space separation line. Here, the parking space dividing line may be a parking space line that divides two adjacent parking spaces. The initial parking space occupation identification value can represent whether the parking space is occupied or not.
As an example, the initial parking space occupation identification value is 1, which indicates that the parking space is occupied. The initial parking space occupation identification value is 0, which indicates that the parking space is not occupied.
And a fifth substep of determining the initial neural network detection model as the neural network detection model in response to determining that the parking space detection difference value is smaller than the target value. The target value is not limited here; for example, the target value may be 0.5.
In some optional implementations of some embodiments, the labeling, by the execution subject, of the sample parking lot image to obtain the sample parking space detection information may include the following steps:
firstly, the sample parking lot image is adjusted to obtain a target sample parking lot image. The sample parking lot image may be cut and scaled, so that a target sample parking lot image with a size of 128DPI × 384DPI may be obtained. 128DPI, 384DPI are the length and width of the target specimen parking lot image, respectively.
And secondly, extracting image characteristics of the target sample parking lot image to obtain a sample parking space angular point coordinate value set and a sample parking space occupation identification value. The image feature extraction can be carried out on the target sample parking lot image through a convolutional neural network.
And thirdly, determining a sample parking space entrance line length value and a sample parking space separation line length value based on the sample angular point coordinate value set.
As an example, a distance value between two vertical sample angular point coordinate values in the sample angular point coordinate value set may be determined as a length value of the sample parking space entrance line. The distance value of two horizontal sample angular point coordinate values in the sample angular point coordinate value set can be determined as the length value of the sample parking space separation line.
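The third step above derives the two line lengths from corner coordinates. A minimal sketch, assuming a rectangular slot with corners given as (x, y) metre coordinates and a corner ordering chosen here for illustration (entrance corners first, then the rear corners):

```python
import math

def dist(p, q):
    """Euclidean distance between two 2-D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Corners of a hypothetical 2.5 m x 5.0 m parking slot.
corners = [(0.0, 0.0), (2.5, 0.0), (2.5, 5.0), (0.0, 5.0)]
entrance_len = dist(corners[0], corners[1])   # line the vehicle crosses
dividing_len = dist(corners[1], corners[2])   # line between adjacent slots
```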
And fourthly, fusing the sample parking space angular point coordinate value set, the sample parking space entrance line length value, the sample parking space separation line length value and the sample parking space occupation identification value to obtain the sample parking space detection information. That is, the sample parking space detection information is determined by the fused combination of these values.
In some optional implementations of some embodiments, the determining, by the execution subject, of the parking space detection difference value between the sample parking space detection information and the initial parking space detection information may include the following steps:
Based on the preset loss function, the following processing substeps are executed on the sample parking space detection information and the initial parking space detection information:
the first substep is to determine a parking space angle point coordinate difference value between each sample parking space angle point coordinate value in the sample parking space angle point coordinate value set and each initial parking space angle point coordinate value in the initial parking space angle point coordinate value set included in the sample parking space detection information, and obtain a parking space angle point coordinate difference value set. The parking space angle point coordinate difference value between each sample parking space angle point coordinate value in the sample parking space angle point coordinate value set and each initial parking space angle point coordinate value in the initial parking space angle point coordinate value set included in the sample parking space detection information can be determined through an LSE function.
And a second substep, determining the average value of the parking space angle point coordinate difference values in the parking space angle point coordinate difference value set as the average parking space angle point coordinate difference value.
And a third substep of determining a parking space entrance line length difference value between the sample parking space entrance line length value and the initial parking space entrance line length value.
And a fourth substep of determining a parking space separation line length difference value between the sample parking space separation line length value and the initial parking space separation line length value.
And a fifth substep of determining a parking space occupation identification difference value between the sample parking space occupation identification value and the initial parking space occupation identification value.
Each of the above three difference values may likewise be determined through an LSE function.
And a sixth substep of determining the sum of the initial parking space confidence value, the average parking space corner point coordinate difference value, the parking space entrance line length difference value, the parking space separation line length difference value and the parking space occupation identification difference value as the parking space detection difference value.
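The six substeps above can be sketched as a single loss computation. Squared error is used here as a stand-in for the unspecified LSE function, and all dictionary field names are assumptions:

```python
def parking_space_loss(sample, initial):
    """Sketch of the six loss substeps. Squared error stands in for the
    LSE function; field names are illustrative, not the disclosure's."""
    # Substep 1: coordinate difference value for each corresponding corner pair
    corner_diffs = [
        (sx - ix) ** 2 + (sy - iy) ** 2
        for (sx, sy), (ix, iy) in zip(sample["corners"], initial["corners"])
    ]
    # Substep 2: average parking space corner point coordinate difference value
    avg_corner_diff = sum(corner_diffs) / len(corner_diffs)
    # Substeps 3-5: entrance line, separation line, and occupation differences
    entrance_diff = (sample["entrance_len"] - initial["entrance_len"]) ** 2
    divider_diff = (sample["divider_len"] - initial["divider_len"]) ** 2
    occupancy_diff = (sample["occupied"] - initial["occupied"]) ** 2
    # Substep 6: sum together with the initial parking space confidence value
    return (initial["confidence"] + avg_corner_diff + entrance_diff
            + divider_diff + occupancy_diff)
```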
Optionally, in response to determining that the parking space detection difference value is not less than the target value, the executing body may adjust the relevant parameters of the initial neural network detection model, determine the adjusted model as the initial neural network detection model, and execute the training step again. The relevant parameters may be adjusted by computing an error value between the parking space detection difference value and a preset difference value and propagating that error from the last layer of the model toward the earlier layers, for example by back propagation combined with stochastic gradient descent. Of course, a parameter-freezing approach may also be adopted as required, keeping the network parameters of certain layers unchanged during the adjustment; no limitation is imposed here.
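As a toy illustration of this adjustment step, the sketch below applies one stochastic-gradient-descent update over a dictionary of named parameters, with an optional `frozen` set mirroring the layer-freezing variant mentioned above (the flat-dictionary representation and all names are assumptions, not the disclosure's actual model):

```python
def sgd_update(params, grads, lr=0.01, frozen=()):
    """One stochastic-gradient-descent update over named parameters.
    Parameters listed in `frozen` are kept unchanged, mirroring the
    optional freezing of some layers. Names are illustrative."""
    return {
        name: value if name in frozen else value - lr * grads[name]
        for name, value in params.items()
    }
```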
The optional technical content in step 104 is an inventive point of the embodiments of the present disclosure and solves the technical problem mentioned in the background of the invention, namely the reduced accuracy of parking. The factors that degrade the accuracy of the parking function are often as follows: when a parking space line is damaged, the accuracy of the parking space information detected by existing visual recognition models is reduced, which reduces the accuracy of parking space detection and, in turn, the accuracy of the parking function. If these factors are addressed, the accuracy of parking can be improved. To achieve this effect, sample parking space detection information is labeled, the neural network detection model extracts initial parking space detection information through a backbone network, and a difference value between the sample parking space detection information and the initial parking space detection information is then determined through a preset loss function and used to adjust the model. In this way, the pre-trained neural network detection model need only recognize the corner point coordinates of a parking space, from which the entrance line length value and the separation line length value can be determined. Errors caused by damaged parking space lines are thereby reduced, the accuracy of the parking space detection information is improved, and hence the accuracy of parking space detection and of parking itself is improved.
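The determination of the entrance line and separation line length values from the corner point coordinates can be illustrated as plain Euclidean distances between corner pairs. The corner ordering assumed below (entrance-left, entrance-right, rear-right, rear-left) is hypothetical, not fixed by the disclosure:

```python
import math

def line_lengths_from_corners(corners):
    """Given four corner points in an assumed order (entrance-left,
    entrance-right, rear-right, rear-left), return the entrance line
    length value and the separation line length value."""
    (x0, y0), (x1, y1), (x2, y2), _ = corners
    entrance_len = math.hypot(x1 - x0, y1 - y0)  # along the entrance edge
    divider_len = math.hypot(x2 - x1, y2 - y1)   # along one side edge
    return entrance_len, divider_len
```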
And 105, comparing the parking space detection information with each piece of parking space information in a preset parking space information base to obtain a comparison result.
In some embodiments, the execution subject may compare the parking space detection information with each piece of parking space information in a preset parking space information base to obtain a comparison result. The preset parking space information base may be a database storing parking space type information. The comparison result may be the parking space type information in this base that matches the parking space detection information.
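One hedged sketch of this comparison step: match the detected entrance line and separation line lengths against each library entry and return the closest entry within a tolerance. The field names, the tolerance, and the matching criterion are all assumptions, since the disclosure does not fix them:

```python
def compare_with_library(detection, library, tol=0.3):
    """Return the parking space type entry whose entrance line and
    separation line lengths are closest to the detected ones (within
    2 * tol in total), or None if nothing matches. Names are illustrative."""
    best, best_err = None, float("inf")
    for entry in library:
        err = (abs(entry["entrance_len"] - detection["entrance_len"])
               + abs(entry["divider_len"] - detection["divider_len"]))
        if err < best_err and err <= 2 * tol:
            best, best_err = entry, err
    return best
```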
And step 106, sending the parking space detection information and the comparison result to the control terminal to control the current vehicle to park.
In some embodiments, the execution subject may send the parking space detection information and the comparison result to a control terminal to control the current vehicle to park.
The above embodiments of the present disclosure have the following advantages: by the parking space detection method of some embodiments of the present disclosure, the accuracy of automatic parking of a vehicle can be improved. Specifically, the reason why the accuracy of automatic parking of the vehicle is reduced is that: the existing visual recognition algorithm is difficult to recognize incomplete parking spaces in images, so that the accuracy of parking space detection is reduced. Based on this, the parking space detection method of some embodiments of the present disclosure first obtains an initial parking lot image set. Thus, a parking lot surround view image set photographed by the vehicle-mounted camera can be acquired. And secondly, merging the initial parking lot images of the initial parking lot image set to obtain a merged parking lot image. Therefore, a more accurate parking lot image can be obtained. And then, adjusting the merged parking lot image to obtain a target parking lot image. This can improve the accuracy of the obtained parking lot image. And then, inputting the target parking lot image into a pre-trained neural network detection model to obtain parking space detection information. Therefore, the pre-trained neural network detection model can identify incomplete parking spaces in the images of the parking lot, and accurate parking space detection information can be obtained. And then, comparing the parking space detection information with each piece of parking space information in a preset parking space information base to obtain a comparison result. Therefore, the parking space type information which is matched with the parking space detection information and is accurate can be obtained. And finally, sending the parking space detection information and the comparison result to a control terminal to control the current vehicle to park. 
Therefore, the control terminal can send out corresponding control instructions according to the accurate parking space detection information and the accurate parking space type information so as to control the vehicle to park. Therefore, according to the parking space detection method, the accurate parking lot image can be input into the neural network detection model, the incomplete parking spaces in the parking lot image can be identified, the parking space detection accuracy can be improved, and further the automatic parking accuracy of the vehicle can be improved.
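Taken together, the described steps form a simple pipeline. The sketch below wires the stages through injected callables, since the disclosure does not fix concrete implementations for the merging, adjustment, detection, or comparison stages; the signatures are assumptions:

```python
def detect_parking_spaces(images, merge, adjust, model, match, library):
    """End-to-end sketch of the method: merge the surround-view images,
    adjust the merged image, run the detection model, and compare the
    detection against the parking space information base. The callables
    and their signatures are illustrative assumptions."""
    merged = merge(images)              # merge the initial parking lot images
    target = adjust(merged)             # adjust to obtain the target image
    detection = model(target)           # neural network detection
    result = match(detection, library)  # compare with the information base
    return detection, result            # both are sent to the control terminal
```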
With further reference to fig. 2, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of a parking space detection device, which correspond to those shown in fig. 1, and the parking space detection device may be specifically applied to various electronic devices.
As shown in fig. 2, the parking space detection device 200 of some embodiments includes: an acquisition unit 201, a merging unit 202, an adjusting unit 203, an input unit 204, a comparison unit 205, and a sending unit 206. The acquisition unit 201 is configured to acquire an initial parking lot image set; the merging unit 202 is configured to merge the initial parking lot images of the initial parking lot image set to obtain a merged parking lot image; the adjusting unit 203 is configured to adjust the merged parking lot image to obtain a target parking lot image; the input unit 204 is configured to input the target parking lot image into a pre-trained neural network detection model to obtain parking space detection information; the comparison unit 205 is configured to compare the parking space detection information with each piece of parking space information in a preset parking space information base to obtain a comparison result; and the sending unit 206 is configured to send the parking space detection information and the comparison result to a control terminal to control the current vehicle to park.
It is understood that the units described in the parking space detection apparatus 200 correspond to the respective steps in the parking space detection method described with reference to fig. 1. Therefore, the operations, characteristics and beneficial effects generated by the above described parking space detection method are also applicable to the parking space detection device 200 and the units contained therein, which are not described herein again.
Referring now to FIG. 3, a block diagram of an electronic device 300 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device in some embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle-mounted terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The terminal device shown in fig. 3 is only an example, and should not bring any limitation to the functions and the use range of the embodiments of the present disclosure.
As shown in fig. 3, the electronic device 300 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 301 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 302 or a program loaded from a storage means 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data necessary for the operation of the electronic apparatus 300 are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
Generally, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 308 including, for example, magnetic tape, hard disk, etc.; and a communication device 309. The communication means 309 may allow the electronic device 300 to communicate wirelessly or by wire with other devices to exchange data. While fig. 3 illustrates an electronic device 300 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 3 may represent one device or may represent multiple devices, as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through the communication device 309, or installed from the storage device 308, or installed from the ROM 302. The computer program, when executed by the processing apparatus 301, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may be separate and not incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring an initial parking lot image set; merging each initial parking lot image of the initial parking lot image set to obtain a merged parking lot image; adjusting the merged parking lot image to obtain a target parking lot image; inputting the target parking lot image into a pre-trained neural network detection model to obtain parking space detection information; comparing the parking space detection information with each piece of parking space information in a preset parking space information base to obtain a comparison result; and sending the parking space detection information and the comparison result to a control terminal to control the current vehicle to park.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software, and may also be implemented by hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit, a merging unit, an adjustment unit, an input unit, a comparison unit, and a transmission unit. Where the names of the units do not in some cases constitute a limitation on the units themselves, for example, the acquisition unit may also be described as a "unit that acquires an initial parking lot image set".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above-mentioned features, but also encompasses other embodiments in which any combination of the above-mentioned features or their equivalents is made without departing from the inventive concept as defined above. For example, the above features and (but not limited to) the features with similar functions disclosed in the embodiments of the present disclosure are mutually replaced to form the technical solution.

Claims (8)

1. A parking space detection method comprises the following steps:
acquiring an initial parking lot image set;
merging each initial parking lot image of the initial parking lot image set to obtain a merged parking lot image;
adjusting the merged parking lot image to obtain a target parking lot image;
inputting the target parking lot image into a pre-trained neural network detection model to obtain parking space detection information;
comparing the parking space detection information with each piece of parking space information in a preset parking space information base to obtain a comparison result;
and sending the parking space detection information and the comparison result to a control terminal to control the current vehicle to park.
2. The method of claim 1, wherein the pre-trained neural network detection model is trained by:
acquiring a sample parking lot image set;
based on the sample parking lot image set, performing the following training steps:
selecting a sample parking lot image from the sample parking lot image set;
marking the sample parking lot image to obtain sample parking space detection information;
inputting the sample parking lot image into an initial neural network detection model to obtain initial parking space detection information;
determining a parking space detection difference value between the sample parking space detection information and the initial parking space detection information based on a preset loss function;
and determining the initial neural network detection model as the neural network detection model in response to determining that the parking space detection difference value is smaller than the target value.
3. The method of claim 2, wherein the method further comprises:
and responding to the situation that the parking space detection difference value is larger than or equal to the target value, adjusting related parameters in the initial neural network detection model, determining the adjusted initial neural network detection model as the initial neural network detection model, and executing the training step again.
4. The method of claim 2, wherein the labeling the sample parking lot image to obtain the sample parking space detection information comprises:
adjusting the sample parking lot image to obtain a target sample parking lot image;
carrying out image feature extraction on the target sample parking lot image to obtain a sample parking space angular point coordinate value set and a sample parking space occupation identification value;
determining a sample parking space entrance line length value and a sample parking space separation line length value based on the sample parking space angular point coordinate value set;
and fusing the sample parking space angle point coordinate value set, the sample parking space entrance line length value, the sample parking space separation line length value and the sample parking space occupation identification value to obtain the sample parking space detection information.
5. The method of claim 2, wherein said initial parking space detection information includes an initial parking space confidence value, an initial parking space corner point coordinate value set, an initial parking space entry line length value, an initial parking space separation line length value, and an initial parking space occupancy identification value; and
the determining, based on a preset loss function, a parking space detection difference value between the sample parking space detection information and the initial parking space detection information includes:
based on a preset loss function, executing the following processing steps on the sample parking space detection information and the initial parking space detection information:
determining a parking space corner point coordinate difference value between each sample parking space corner point coordinate value in a sample parking space corner point coordinate value set included in the sample parking space detection information and each initial parking space corner point coordinate value in an initial parking space corner point coordinate value set, to obtain a parking space corner point coordinate difference value set;
determining the average value of the parking space corner point coordinate difference values in the parking space corner point coordinate difference value set as the average parking space corner point coordinate difference value;
determining a parking space entrance line length difference value between the sample parking space entrance line length value and the initial parking space entrance line length value;
determining a length difference value of the parking space separation line between the length value of the sample parking space separation line and the length value of the initial parking space separation line;
determining a parking space occupation identification difference value between the sample parking space occupation identification value and the initial parking space occupation identification value;
and determining the sum of the initial parking space confidence value, the average parking space corner point coordinate difference value, the parking space entrance line length difference value, the parking space separation line length difference value and the parking space occupation identification difference value as the parking space detection difference value.
6. A parking space detection device, comprising:
an acquisition unit configured to acquire an initial parking lot image set;
a merging unit configured to merge each initial parking lot image of the initial parking lot image set to obtain a merged parking lot image;
the adjusting unit is configured to adjust the combined parking lot image to obtain a target parking lot image;
the input unit is configured to input the target parking lot image into a pre-trained neural network detection model to obtain parking space detection information;
the comparison unit is configured to compare the parking space detection information with each piece of parking space information in a preset parking space information base to obtain a comparison result;
and the transmitting unit is configured to transmit the parking space detection information and the comparison result to a control terminal so as to control the current vehicle to park.
7. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method recited in any of claims 1-5.
8. A computer-readable medium, on which a computer program is stored, wherein the computer program, when being executed by a processor, carries out the method according to any one of claims 1-5.
CN202211437221.6A 2022-11-17 2022-11-17 Parking space detection method, device, electronic equipment and computer readable medium Active CN115565158B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211437221.6A CN115565158B (en) 2022-11-17 2022-11-17 Parking space detection method, device, electronic equipment and computer readable medium


Publications (2)

Publication Number Publication Date
CN115565158A true CN115565158A (en) 2023-01-03
CN115565158B CN115565158B (en) 2023-05-26

Family

ID=84769697

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211437221.6A Active CN115565158B (en) 2022-11-17 2022-11-17 Parking space detection method, device, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN115565158B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116740682A (en) * 2023-08-14 2023-09-12 禾昆科技(北京)有限公司 Vehicle parking route information generation method, device, electronic equipment and readable medium

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180232583A1 (en) * 2017-02-16 2018-08-16 Honda Motor Co., Ltd. Systems for generating parking maps and methods thereof
CN111508260A (en) * 2019-01-30 2020-08-07 上海欧菲智能车联科技有限公司 Vehicle parking space detection method, device and system
CN111723659A (en) * 2020-05-14 2020-09-29 上海欧菲智能车联科技有限公司 Parking space determining method and device, computer equipment and storage medium
CN112330601A (en) * 2020-10-15 2021-02-05 浙江大华技术股份有限公司 Parking detection method, device, equipment and medium based on fisheye camera
CN112668588A (en) * 2020-12-29 2021-04-16 禾多科技(北京)有限公司 Parking space information generation method, device, equipment and computer readable medium
CN113436461A (en) * 2021-05-31 2021-09-24 荣耀终端有限公司 Method for sending parking space information, vehicle-mounted device and computer-readable storage medium
CN113553881A (en) * 2020-04-23 2021-10-26 华为技术有限公司 Parking space detection method and related device
WO2021226912A1 (en) * 2020-05-14 2021-11-18 上海欧菲智能车联科技有限公司 Parking spot determination method and apparatus, computer device and storage medium
CN113744560A (en) * 2021-09-15 2021-12-03 厦门科拓通讯技术股份有限公司 Automatic parking method and device for parking lot, server and machine-readable storage medium
CN114724107A (en) * 2022-03-21 2022-07-08 北京卓视智通科技有限责任公司 Image detection method, device, equipment and medium
CN114821540A (en) * 2022-05-27 2022-07-29 禾多科技(北京)有限公司 Parking space detection method and device, electronic equipment and computer readable medium
CN114842446A (en) * 2022-04-14 2022-08-02 合众新能源汽车有限公司 Parking space detection method and device and computer storage medium
US20220245952A1 (en) * 2021-02-02 2022-08-04 Nio Technology (Anhui) Co., Ltd Parking spot detection method and parking spot detection system
CN114926416A (en) * 2022-05-06 2022-08-19 广州小鹏自动驾驶科技有限公司 Parking space detection method and device, vehicle and storage medium


Cited By (2)

Publication number Priority date Publication date Assignee Title
CN116740682A (en) * 2023-08-14 2023-09-12 禾昆科技(北京)有限公司 Vehicle parking route information generation method, device, electronic equipment and readable medium
CN116740682B (en) * 2023-08-14 2023-10-27 禾昆科技(北京)有限公司 Vehicle parking route information generation method, device, electronic equipment and readable medium

Also Published As

Publication number Publication date
CN115565158B (en) 2023-05-26

Similar Documents

Publication Publication Date Title
CN112598762B (en) Three-dimensional lane line information generation method, device, electronic device, and medium
CN110852258A (en) Object detection method, device, equipment and storage medium
CN107941226B (en) Method and device for generating a direction guideline for a vehicle
CN114399588B (en) Three-dimensional lane line generation method and device, electronic device and computer readable medium
CN112116655A (en) Method and device for determining position information of image of target object
CN115817463B (en) Vehicle obstacle avoidance method, device, electronic equipment and computer readable medium
CN114463768A (en) Form recognition method and device, readable medium and electronic equipment
CN116164770B (en) Path planning method, path planning device, electronic equipment and computer readable medium
CN114894205A (en) Three-dimensional lane line information generation method, device, equipment and computer readable medium
CN112598731B (en) Vehicle positioning method and device, electronic equipment and computer readable medium
CN111967332B (en) Visibility information generation method and device for automatic driving
CN110796144A (en) License plate detection method, device, equipment and storage medium
CN115565158B (en) Parking space detection method, device, electronic equipment and computer readable medium
CN110956128A (en) Method, apparatus, electronic device, and medium for generating lane line image
CN112528970A (en) Guideboard detection method, device, equipment and computer readable medium
CN115546769B (en) Road image recognition method, device, equipment and computer readable medium
CN115610415B (en) Vehicle distance control method, device, electronic equipment and computer readable medium
CN110852242A (en) Watermark identification method, device, equipment and storage medium based on multi-scale network
CN116311152A (en) Evaluation method and device for parking space detection effect and related equipment
CN116543367A (en) Method, device, equipment and medium for generating parking space information based on fisheye camera
CN116563818A (en) Obstacle information generation method, obstacle information generation device, electronic device, and computer-readable medium
CN115393423A (en) Target detection method and device
CN115408609A (en) Parking route recommendation method and device, electronic equipment and computer readable medium
CN115471477A (en) Scanning data denoising method, scanning device, scanning equipment and medium
CN111383337A (en) Method and device for identifying objects

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant