WO2021253333A1 - Vehicle positioning method, apparatus and server based on screen optical communication - Google Patents
Vehicle positioning method, apparatus and server based on screen optical communication
- Publication number
- WO2021253333A1 (PCT/CN2020/096844)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- target
- image
- vehicle
- screen
- identification code
- Prior art date
Classifications
- G06F16/955 — Electric digital data processing; information retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
- G06F3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
- G06T7/73 — Image analysis; determining position or orientation of objects or cameras using feature-based methods
Description
- This application relates to the field of positioning technology, and in particular to a vehicle positioning method, device and server based on screen optical communication.
- Vehicle positioning technologies usually include the following: high-precision radar-based positioning methods, lidar-based positioning methods, and camera-based positioning methods.
- The positioning method based on high-precision radar installs one or more high-precision radars on the vehicle; when a radar emits ultrasonic pulses, it receives the ultrasonic waves reflected by surrounding objects, identifies those objects from the detected ultrasonic waveform, and then determines the relative position between the vehicle and each object.
- The positioning method based on lidar installs a lidar on the vehicle; the lidar emits a laser beam and simultaneously receives the beam reflected by surrounding objects, and by comparing the reflected beam with the emitted beam it detects the position, speed, and other characteristic quantities of the surrounding objects, and then determines the relative position between the vehicle and each object.
- the camera-based positioning method refers to a technical solution that uses a camera for positioning, which can be divided into two methods: monocular camera positioning and multi-camera positioning.
- The principle of monocular camera positioning is mainly that objects photographed by a monocular camera appear larger when near and smaller when far.
- Given the vehicle speed and camera focal length, multiple images of the same object are taken at fixed intervals, and the actual distance between the object and the camera is calculated from the change in the object's size across the images; this distance in turn determines the relative position between the vehicle and the object.
- Binocular or multi-camera positioning is based on the principle of parallax.
- The same object is captured by multiple cameras, the deviation of the object's position across the images is calculated, and from this deviation and the spacing between the cameras the actual distance between the object and the camera is computed; this distance in turn determines the relative position between the vehicle and the object.
- However, high-precision radars currently on the market are relatively expensive, lidar positioning is easily affected by weather and the environment, and monocular or multi-camera setups can only achieve high-precision positioning over a small range.
- the purpose of the embodiments of the present application is to provide a vehicle positioning method, device, and server based on screen light communication, including but not limited to solving the problems of high cost, low accuracy, small range, or low stability of related vehicle positioning methods.
- a vehicle positioning method based on screen optical communication which is applied to a server, and includes:
- Receiving a target image sent by a target vehicle, where the target image includes a target screen image, and the target screen image includes at least one identification code;
- Performing image recognition on the identification code to obtain identification code information;
- Determining the relative position information between the target vehicle and the target screen according to the identification code information.
- a vehicle positioning method based on screen light communication which is applied to a vehicle, and includes:
- Acquiring an image, and when it is recognized that the image includes a target screen image, determining that the image is a target image;
- the target image is sent to the server, so that the server determines the relative position information between the vehicle and the target screen according to the target image.
- a vehicle positioning device based on screen optical communication which is applied to a server, and includes:
- a receiving module configured to receive a target image sent by a target vehicle, the target image includes a target screen image; the target screen image includes at least one identification code;
- An identification module configured to perform image recognition on the identification code to obtain identification code information;
- A determining module configured to determine the relative position information between the target vehicle and the target screen according to the identification code information.
- a vehicle positioning device based on screen light communication which is applied to a vehicle, and includes:
- An acquisition module configured to acquire images;
- A judging module configured to determine that an image is a target image when it is recognized that the image includes a target screen image;
- A sending module configured to send the target image to the server, so that the server determines the relative position information between the vehicle and the target screen according to the target image.
- An embodiment of the present application provides a server, including a memory, a processor, and a computer program stored in the memory and executable on the processor.
- When the processor executes the computer program, the server implements the vehicle positioning method based on screen optical communication as described in any one of the above-mentioned first aspects.
- An embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the vehicle positioning method based on screen optical communication as described in any one of the above-mentioned first aspects.
- The embodiments of the present application provide a computer program product which, when run on a terminal device, causes the terminal device to execute the vehicle positioning method based on screen optical communication according to any one of the above-mentioned first aspects.
- The beneficial effect of the vehicle positioning method based on screen optical communication is as follows: by processing the target image (which contains the target screen image) sent by the target vehicle, the identification code in the target screen image is recognized to obtain identification code information, and the relative position information between the target vehicle and the target screen is calculated from that information. Based on the screen optical communication between the target screen and the vehicle, this realizes wide-range, high-precision vehicle positioning at low equipment cost, reduces the influence of environmental factors on ranging accuracy, and thereby improves the stability of vehicle positioning.
- FIG. 1 is an architecture diagram of a vehicle positioning system based on screen optical communication provided by an embodiment of the present application
- FIG. 2 is a schematic flowchart of a vehicle positioning method based on screen optical communication provided by an embodiment of the present application
- FIG. 3 is a schematic diagram of an application scenario for performing binarization processing on a target image provided by an embodiment of the present application
- FIG. 4 is a schematic diagram of a target screen of a vehicle positioning method based on screen optical communication provided by another embodiment of the present application.
- FIG. 5 is a schematic diagram of a target screen of a vehicle positioning method based on screen optical communication provided by another embodiment of the present application.
- FIG. 6 is a schematic diagram of a target screen image including a first line segment in a vehicle positioning method based on screen light communication provided by an embodiment of the present application;
- FIG. 7 is a schematic diagram of an application scenario of performing boundary suppression processing on a pre-processed target image in a vehicle positioning method based on screen light communication provided by an embodiment of the present application;
- FIG. 8 is a schematic diagram of an application scenario of performing boundary suppression processing on a pre-processed target image of a vehicle positioning method based on screen light communication provided by an embodiment of the present application;
- FIG. 9 is a schematic diagram of an application scenario of performing boundary suppression processing on a pre-processed target image of a vehicle positioning method based on screen optical communication provided by an embodiment of the present application;
- FIG. 10 is a schematic diagram of an application scenario for determining a two-dimensional code positioning area of a vehicle positioning method based on screen optical communication provided by an embodiment of the present application;
- FIG. 11 is a schematic diagram of a two-dimensional code positioning area of a vehicle positioning method based on screen optical communication according to an embodiment of the present application.
- FIG. 12 is a schematic diagram of an application scenario for determining a two-dimensional code positioning area of a vehicle positioning method based on screen optical communication according to another embodiment of the present application;
- FIG. 13 is a schematic diagram of an application scenario for calculating the actual distance between the target vehicle and the target screen of the vehicle positioning method based on screen optical communication provided by an embodiment of the present application;
- FIG. 14 is a schematic diagram of an application scenario for calculating the deflection angle between the target vehicle and the target screen of the vehicle positioning method based on screen light communication provided by an embodiment of the present application;
- FIG. 15 is a schematic diagram of an application scenario of detecting a two-dimensional code positioning area of a vehicle positioning method based on screen optical communication provided by another embodiment of the present application;
- FIG. 16 is a schematic flowchart of a vehicle positioning method based on screen optical communication according to another embodiment of the present application.
- FIG. 17 is a schematic structural diagram of a vehicle positioning device based on screen optical communication according to an embodiment of the present application.
- FIG. 18 is a schematic structural diagram of a vehicle positioning device based on screen optical communication according to another embodiment of the present application.
- FIG. 19 is a schematic diagram of the structure of a server provided by an embodiment of the present application.
- the vehicle positioning method based on screen optical communication provided in the embodiments of the present application can be applied to terminal devices such as servers or vehicles, and the embodiments of the present application do not impose any restrictions on the specific types of terminal devices.
- this application proposes a vehicle positioning method based on screen light communication, a vehicle positioning device based on screen light communication, a server and a computer-readable storage medium, which can pass between the vehicle and the screen when the vehicle is driving automatically. Inter-screen optical communication to achieve high-precision vehicle positioning.
- The vehicle positioning system based on screen optical communication consists of one or more screens (only one is shown in Figure 1), one or more autonomous vehicles (only three are shown in Figure 1: vehicle a, vehicle b, and vehicle c), and a server; the screen and the self-driving vehicle can communicate via screen optical communication, and the self-driving vehicle and the server can communicate with each other.
- the self-driving vehicle is a vehicle that may have a need for vehicle positioning services to realize automatic driving
- the screen is a positioning device that can provide positioning services.
- When an autonomous vehicle is in the process of autonomous driving, it can act as the target vehicle and send a target image including a target screen image to the server of the vehicle positioning system based on screen optical communication; after receiving the target image sent by the autonomous vehicle, the server can recognize the target screen image to obtain identification code information, and determine the relative position information between the autonomous vehicle and the target screen according to that information.
- FIG. 2 shows a schematic flowchart of a vehicle positioning method based on screen light communication provided by the present application.
- the method can be applied to the above-mentioned server.
- S101 Receive a target image sent by a target vehicle, where the target image includes a target screen image; and the target screen image includes at least one identification code.
- a target image including a target screen image captured and sent by a camera on the target vehicle is received, where the camera may be a monocular camera.
- multiple screens can be set in each region (or city) in advance, and the number of screens can be specifically set according to actual conditions; for example, 10,000 screens can be set in City A.
- each screen is used to display at least one identification code to provide identification code information.
- the types of screens include, but are not limited to, electronic screens, road signs, or printed matter.
- the identification code can be a two-dimensional code or other image that can be used for positioning and can display the identification code information at the same time.
- the target screen refers to the screen corresponding to the target image.
- the target image taken by the target vehicle may include other object images except the target screen image. Therefore, it is necessary to preprocess the target image, realize image noise reduction processing, and obtain the preprocessed target image to reduce the impact of environmental noise on the accuracy of vehicle positioning, thereby improving the accuracy of vehicle positioning.
- the preprocessing includes but is not limited to at least one of denoising processing and binarization processing.
- The maximum between-class variance (OTSU) algorithm can be used to calculate the conversion threshold T of the binarization process: pixels whose gray value is greater than T are converted to 255 and pixels whose gray value is less than T are converted to 0, or, inversely, pixels whose gray value is greater than T are converted to 0 and pixels whose gray value is less than T are converted to 255, completing the image binarization. It should be noted that the value range of T is 0–255.
- In this example, pixels in the target image with a gray value greater than T are converted to 0, and pixels with a gray value less than T are converted to 255.
- The percentage of identification-code pixels among the pixels of the target image is denoted ω0, and their average gray level μ0;
- the percentage of the remaining pixels (those outside the identification code) is denoted ω1, and their average gray level μ1;
- the total average gray level of the target image is denoted μ, and the between-class variance g;
- the target image is denoted O(x, y), where (x, y) is the position coordinate of a pixel, and its size is M pixels × N pixels;
- N0 is the number of pixels in the target image whose gray value is less than the conversion threshold T, and N1 the number whose gray value is greater than T.
- From the size M × N of the target image O(x, y) and the conversion threshold T, the quantities ω0, μ0, ω1, μ1, μ, and g can be obtained.
- The conversion relationships between N0, N1, and the quantities above are:

  ω0 = N0 / (M × N), ω1 = N1 / (M × N), N0 + N1 = M × N, ω0 + ω1 = 1,
  μ = ω0·μ0 + ω1·μ1, g = ω0·(μ0 − μ)² + ω1·(μ1 − μ)² = ω0·ω1·(μ0 − μ1)².
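The OTSU threshold search described above can be sketched as a brute-force scan over all 256 candidate thresholds, keeping the T that maximizes the between-class variance g. This is a minimal pure-Python illustration of the formulas, not the patent's implementation; `gray` is assumed to be a flat list of 8-bit gray values.

```python
def otsu_threshold(gray):
    """Return the T in 0..255 maximizing g = w0 * w1 * (mu0 - mu1)**2."""
    hist = [0] * 256
    for v in gray:
        hist[v] += 1
    total = len(gray)
    best_t, best_g = 0, -1.0
    for t in range(256):
        n0 = sum(hist[:t + 1])          # pixels with gray value <= t
        n1 = total - n0                  # pixels with gray value > t
        if n0 == 0 or n1 == 0:
            continue                     # variance undefined for an empty class
        w0, w1 = n0 / total, n1 / total
        mu0 = sum(i * hist[i] for i in range(t + 1)) / n0
        mu1 = sum(i * hist[i] for i in range(t + 1, 256)) / n1
        g = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if g > best_g:
            best_t, best_g = t, g
    return best_t

def binarize(gray, T):
    """Map pixels above T to 255 and the rest to 0 (the inverse mapping is equally valid)."""
    return [255 if v > T else 0 for v in gray]
```

For an image whose histogram has two well-separated modes (e.g. a dark QR code on a bright screen), the returned T falls between the modes, so `binarize` cleanly separates code pixels from background pixels.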
- Fig. 3 exemplarily shows a schematic diagram of an application scenario after the target image is binarized.
- the target screen is an electronic screen
- the electronic screen includes two identification codes
- the identification code is a two-dimensional code
- the target image includes one electronic screen image
- the electronic screen image includes two two-dimensional codes.
- The identification code information is displayed on the identification code and is used to calculate the relative position information between the target vehicle and the target screen; depending on which identification code is set, the recognized identification code information can differ.
- the following is an exemplary description of the identification code information in the target screen image provided by this application in conjunction with Figures 4-5.
- The identification code information includes, but is not limited to, the actual size (or actual side length) of the identification code, the display position information of the identification code on the target screen, the identifier of the target screen, and the road condition information at the target screen's location at the current moment.
- Each screen is assigned a different identifier, so the target screen's identifier contained in the identification code information can be recognized, the screen corresponding to that identifier is determined as the target screen, and the location information of the target screen is determined from the identifier.
- For example, if the identifier of the target screen included in the identification code information is ID008, it can be determined that the screen with identifier ID008 is the target screen, and the location information of that screen can be obtained at the same time.
- the identification code can be updated at a preset time interval to update the identification code information carried on the identification code, thereby updating the road condition information at the location of the target screen in real time.
- the preset time interval is specifically set according to actual conditions. For example, if the preset time interval is set to 30s, the identification code can be updated every 30s.
- FIG. 4 a schematic diagram of a target screen is exemplarily provided
- the target screen is an electronic screen.
- the electronic screen includes a two-dimensional code.
- the two-dimensional code is symmetric about the center of the electronic screen, and the distance between the four sides of the two-dimensional code and the border of the electronic screen is the same.
- the positioning area of the two-dimensional code can be determined in the target screen image according to the image preprocessing and boundary suppression processing of the two-dimensional code.
- The position of the QR code in the target image is determined, the QR code image is then cropped according to that specific position, and the cropped QR code image is sent to a QR code parser, which analyzes and recognizes the two-dimensional code to obtain the identification code information.
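The interception step above amounts to cropping the detected positioning area out of the image before handing it to a decoder. A minimal sketch, assuming the detector has already produced an axis-aligned bounding box; the decoder call at the end (`parse_qr`) is a hypothetical placeholder, not an API named in the patent.

```python
def crop_region(image, box):
    """Crop a rectangular patch from a 2-D image.

    image: list of pixel rows; box: (row0, col0, row1, col1), end-exclusive.
    """
    r0, c0, r1, c1 = box
    return [row[c0:c1] for row in image[r0:r1]]

# The cropped patch would then go to a QR parser, e.g. (hypothetical):
#   info = parse_qr(crop_region(binary_image, qr_box))
```

Cropping before decoding keeps surrounding scene content out of the parser's input, which is the noise-reduction rationale given for the preprocessing steps.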
- the target screen may include more than two identification codes.
- the display position information may also include relative position information between the multiple identification codes.
- FIG. 5 a schematic diagram of another target screen is exemplarily provided.
- the target screen is an electronic screen.
- Two QR codes are arranged on the left and right of the electronic screen. The content of the two QR codes is the same.
- the QR code on the left is displayed on the electronic screen by rotating 90° to the right.
- The distance between each QR code and the edge of the electronic screen is a, and the distance between the two QR codes is 2a; that is, the distance between each two-dimensional code and the edge of the electronic screen is equal to one half of the distance between the two two-dimensional codes.
- The identification code information of each two-dimensional code in Figure 5 should include: the size (or side length) of the two-dimensional code; the display position information of this code and of the other code on the electronic screen (the two codes are symmetric about the center of the electronic screen, each code's distance to the screen boundary is the same, and the distance between the two codes is twice the distance between a code and the screen edge); and the road condition information at the target screen's location at the current moment.
- S103 Determine relative position information between the target vehicle and the target screen according to the identification code information.
- the relative position information between the target vehicle and the target screen includes the actual distance and deflection angle between the target vehicle and the target screen.
- the corresponding target screen is determined according to the target screen identification, and the actual distance and angle between the target vehicle and the target screen are calculated according to the actual size (side length) of the identification code and the display position information of the identification code in the target screen image.
- The image side length of the preset side in the target screen image is obtained; this image side length can be expressed as a number of pixels. The image side length (in pixels, px) is converted to a length (in centimeters, cm) in the target screen image, and from this length and the actual side length of the preset side, the actual distance between the target vehicle and the electronic screen is calculated.
- the deflection angle between the target vehicle and the target screen includes a horizontal deflection angle and a vertical deflection angle
- The positions of the multiple first line segments in the target screen image can be determined according to the display position information of the identification code on the target screen, and the image lengths of those first line segments in the target screen image can then be calculated.
- According to the actual lengths of the multiple first line segments, their image lengths in the target screen image, and a preset conversion coefficient, the distances between the individual first line segments and the target vehicle are calculated, and from those distances the deflection angle between the target vehicle and the target screen is computed.
- The unit of measurement of lengths in the image is the pixel (px); therefore, the image side length of the preset side in the target screen image can be expressed as the number of pixels of the preset side in the target screen image,
- the image length difference between the center point of the target image and the center point of the target screen image can be expressed as a difference in pixel counts between those two center points,
- and the image lengths of the multiple first line segments in the target screen image can be expressed as their numbers of pixels in the target screen image. The actual length of the preset side and the actual lengths of the first line segments, however, are measured in centimeters (cm); therefore, when calculating the relative position information between the target vehicle and the target screen, the unit of measurement must be converted via the preset conversion coefficient between pixels and centimeters, converting pixels (px) into centimeters (cm) to obtain the lengths in centimeters.
- the relative position information includes the actual distance between the target vehicle and the target screen
- the identification code information includes the actual side length of the preset side in the identification code
- the step S103 includes:
- the relative position information includes the actual distance between the target vehicle and the target screen.
- The image side length of the preset side in the target screen image can be expressed as the number of pixels of the preset side within the identification code; the unit of measurement is converted according to the preset conversion coefficient between pixels and centimeters to obtain the length (in centimeters) of the preset side in the target screen image, and from this length and the actual side length of the preset side, the actual distance between the target vehicle and the target screen is calculated.
- The preset side can be set according to actual conditions. For example, when the identification code is a rectangle, the preset side is set to the height of the identification code; correspondingly, the height of the identification code included in the identification code information is the actual side length of the preset side.
- Since a two-dimensional code is generally square in practical applications, the preset side can be set to any side of the two-dimensional code; correspondingly, the actual side length of the two-dimensional code included in the identification code information is the actual side length of the preset side.
- The number of pixels occupied in the target screen image by any side of the QR code can be obtained, and from the actual side length of the QR code and that pixel count, the actual distance between the target vehicle and the target screen is calculated.
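The distance computation follows the pinhole similar-triangle relation: the ratio of the code's pixel size to the focal length equals the ratio of its actual size to its distance. A minimal sketch; the excerpt does not spell out the exact conversion coefficient it uses, so expressing it as a focal length in pixels (`focal_px`) is an assumption for illustration.

```python
def distance_cm(actual_side_cm, side_px, focal_px):
    """Pinhole model: side_px / focal_px == actual_side_cm / distance_cm.

    actual_side_cm: real side length of the QR code (from the identification
    code information); side_px: its measured side length in the target screen
    image; focal_px: camera focal length expressed in pixels (assumed known
    from calibration).
    """
    return actual_side_cm * focal_px / side_px
```

For example, a 50 cm code spanning 100 px under an 800 px focal length yields a range of 400 cm; halving the pixel span doubles the estimated distance, matching the "near large, far small" principle cited earlier.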
- the relative position information includes the deflection angle of the target vehicle relative to the target screen;
- the identification code information includes the actual lengths of a plurality of first line segments preset in the identification code;
- the step S103 includes:
- According to the image lengths of the plurality of first line segments in the target screen image, the actual lengths of the plurality of first line segments, and a preset conversion coefficient, respectively calculating the distances between the plurality of first line segments and the target vehicle;
- the deflection angle is determined according to the distance between the plurality of first line segments and the target vehicle.
- the relative position information includes the deflection angle of the target vehicle relative to the target screen, and the deflection angle includes the horizontal deflection angle and the vertical deflection angle.
- the identification code information includes the actual lengths of a plurality of first line segments preset in the identification code; a first line segment is a line segment in the identification code used to determine the deflection angle between the target vehicle and the target screen. The position of a first line segment in the identification code can be set according to actual conditions, and its actual length varies with that position.
- the measurement unit of the image length of a first line segment in the target screen image is usually pixels (px). According to the preset conversion coefficient between pixels and centimeters, this is converted into a length in centimeters (cm), giving the length of each first line segment in the target screen image; the distance between each first line segment and the target vehicle is then calculated from that image length and the actual length of the first line segment.
- the multiple first line segments should include multiple horizontal line segments and multiple vertical line segments.
- the distances between the multiple first line segments in the target screen image and the target vehicle include horizontal distances and vertical distances, which are used to calculate the vertical deflection angle of the target vehicle relative to the target screen and the horizontal deflection angle of the target vehicle relative to the target screen, respectively.
- all horizontal distances can be processed by a preset algorithm to obtain the vertical deflection angle between the target vehicle and the target screen, and all vertical distances can be processed by a preset algorithm to obtain the horizontal deflection angle between the target vehicle and the target screen.
- the preset algorithm includes, but is not limited to, the MUSIC (Multiple Signal Classification) algorithm.
- the degree of deformation of the plurality of first line segments is determined.
- the deflection angle between the target vehicle and the target screen can be calculated according to the degrees of deformation of the multiple first line segments; a monocular camera thereby simulates a positioning method based on multiple cameras, reducing the accuracy error of the deflection angle. Since the method does not rely on image-matching algorithms over multiple images and is less affected by environmental factors, it can achieve vehicle positioning in complex situations.
- the actual lengths of the four equidistant horizontal line segments, the actual lengths of the four equidistant vertical line segments, and the position of each first line segment in the identification code can be determined from the side length of the identification code in the image.
- if the identification code is a square image such as a two-dimensional code and its side is 50 pixels long, the spacing between adjacent horizontal line segments is 10 pixels and the spacing between adjacent vertical line segments is 10 pixels, from which the position of each horizontal line segment and each vertical line segment in the identification code is determined.
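The worked example above (a 50-pixel square code carrying 4 equidistant interior lines spaced 10 pixels apart) can be sketched in a few lines; the function name is illustrative, not part of the patent:

```python
def line_positions(side_px, n_lines):
    # pixel offsets of n equally spaced interior line segments across a side:
    # n lines divide the side into n + 1 equal intervals
    spacing = side_px // (n_lines + 1)
    return [spacing * (i + 1) for i in range(n_lines)]

# 50-pixel QR side with 4 lines -> 10-pixel spacing, as in the example
print(line_positions(50, 4))  # [10, 20, 30, 40]
```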
- FIG. 6 provides a schematic diagram of the first line segments in the target screen image.
- the target screen is an electronic screen
- the corresponding target screen image is an electronic screen image.
- the electronic screen image includes a two-dimensional code; the first line segments are 4 equidistant horizontal line segments and 4 equidistant vertical line segments on the two-dimensional code;
- the target screen is an electronic screen
- the corresponding target screen image is an electronic screen image.
- the electronic screen image includes two identical two-dimensional codes; the first line segments are 2 equidistant horizontal line segments and 2 equidistant vertical line segments on each two-dimensional code.
- the relative position information includes a deflection angle of the target vehicle relative to the target screen
- the identification code information includes display position information of the identification code on the target screen
- the step S103 includes:
- the deflection angle is determined based on the difference in image length.
- the relative position information includes the deflection angle of the target vehicle relative to the target screen.
- the deflection angle of the target vehicle relative to the target screen is calculated according to the image length difference between the center point of the target image and the center point of the target screen image, together with the actual distance from the target vehicle to the target screen.
- the image length difference between the center point of the target image and the center point of the target screen image can be expressed as a difference in the number of pixels; this difference includes a horizontal pixel-count difference and a vertical pixel-count difference.
- the horizontal deflection angle of the target vehicle relative to the target screen can be calculated from the horizontal pixel-count difference between the center point of the target image and the center point of the target screen image together with the distance from the target vehicle to the target screen; the vertical deflection angle of the target vehicle relative to the target screen is calculated analogously from the vertical pixel-count difference and the same distance.
- the target screen image is an electronic screen image
- the electronic screen image includes two identification codes
- the identification code is taken to be a two-dimensional code as an example.
- FIGS. 7-15 provide schematic diagrams of application scenarios for calculating the relative position information between the target vehicle and the target screen;
- Figures 7-9 are schematic diagrams of application scenarios for boundary suppression processing on the preprocessed target image.
- the boundary suppression operation includes: taking the 8 pixels surrounding any pixel in the image as its edge pixels (it should be noted that a pixel on the image boundary has fewer than 8 edge pixels), and comparing the gray value of each pixel with those of its edge pixels; if any edge pixel has a gray value of 0, the pixel is considered adjacent to the image boundary and its gray value is converted to 0.
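The boundary suppression described above amounts to a 3×3 minimum filter (morphological erosion) on a binarized image. A minimal sketch with numpy, assuming 0/1 gray values; the function name and edge-padding choice are illustrative assumptions, not the patent's exact code:

```python
import numpy as np

def suppress_boundary(gray):
    # gray: 2D array of 0/1 gray values; a pixel becomes 0 when any of its
    # 8 edge pixels already has gray value 0 (3x3 minimum filter / erosion)
    h, w = gray.shape
    p = np.pad(gray, 1, mode="edge")  # boundary pixels see fewer real neighbours
    neighbourhoods = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.minimum.reduce(neighbourhoods)
```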
- each QR code has three positioning areas, and each positioning area consists of a black frame, a white frame, and a square displayed in a nested manner.
- an image as shown in Figure 7 can be obtained.
- the image retains several pixel areas in which a black frame, a white frame, and a square are displayed in a nested manner (as shown in Figure 8), together with other pixel areas; the gray values of those other pixel areas are converted to 0 to obtain an image as shown in Figure 9.
- the pixel areas in FIG. 7 in which a black frame, a white frame, and a square are displayed in a nested manner contain the positioning areas of the two-dimensional code.
- Figures 10-12 are schematic diagrams of application scenarios for determining the two-dimensional code positioning areas.
- determining the location area of the two-dimensional code includes:
- the preset marking conditions and preset positioning conditions can be set correspondingly according to the types of identification codes.
- the preset positioning condition is a preset identification condition for determining whether any pixel area in the identification code is a positioning area of the identification code.
- the preset marking condition is set to a pixel area in which multiple black borders and a black square are displayed in a nested manner. The marked areas that meet the preset marking condition are then filled in (as shown in Figure 10, the pixel gray values of the marked areas are converted to 0), and the centroid position of each marked area is calculated by traversal; the corresponding preset positioning condition is determined according to the type of identification code, the centroid positions are examined, the marked areas whose centroids meet the preset positioning condition are obtained, and the positioning areas of the identification code are determined.
- Fig. 11 is a schematic diagram of a positioning area of a two-dimensional code.
- the white color blocks (that is, the parts with pixel value 1) in the two-dimensional code positioning area are taken as wave peaks, and the black color blocks (that is, the parts with pixel value 0) are taken as wave troughs.
- the relative widths of the peaks and troughs of each two-dimensional code can be calculated from the numbers of pixels with pixel values 0 and 1 that a vertical line segment passes through at the centroid position.
- the corresponding preset positioning condition can be set as: the number of peaks is 3, the number of troughs is 2, and the ratio of peak width to trough width meets a preset ratio threshold; a pixel area satisfying these conditions is a positioning area of the two-dimensional code.
- the gray value of the pixel area where the number of wave crests and/or the number of wave troughs does not meet the preset number can be converted to zero.
- the similarity of the ratio of peaks and troughs in the pixel area can be obtained by calculating the Euclidean distance, and the similarity can be used as the preset ratio threshold.
- the specific algorithm is as follows:
- the preset ratio threshold can be set to 0.8; that is, when the number of peaks in a pixel area is 3 and the number of troughs is 2, if the peak-to-trough ratio of that pixel area is less than 0.8, the pixel area is determined to be a two-dimensional code positioning area; if the ratio is greater than 0.8, it is determined that the pixel area is not a two-dimensional code positioning area.
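A closely related standard check for a scan through a QR positioning area's centroid is the finder-pattern module ratio 1:1:3:1:1 (five alternating runs). The sketch below implements that run-ratio test rather than the patent's exact 0.8 peak-to-trough threshold; the names, tolerance, and ratio assumption are illustrative:

```python
def run_lengths(scanline):
    # lengths of consecutive runs along a 1-D scan of 0/1 pixel values
    runs = []
    current, count = scanline[0], 0
    for v in scanline:
        if v == current:
            count += 1
        else:
            runs.append(count)
            current, count = v, 1
    runs.append(count)
    return runs

def looks_like_finder(scanline, tol=0.5):
    # a scan through the centroid of a QR positioning area should show
    # 5 alternating runs (peaks and troughs) in the module ratio 1:1:3:1:1
    runs = run_lengths(scanline)
    if len(runs) != 5:
        return False
    module = sum(runs) / 7.0  # the pattern is 7 modules wide
    expected = [1, 1, 3, 1, 1]
    return all(abs(r / module - e) <= tol for r, e in zip(runs, expected))
```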
- Fig. 12 includes a pixel area determined to be a two-dimensional code positioning area.
- after the positioning areas of each QR code are determined, the arrangement of the two QR codes makes clear that the three positioning areas with the smaller abscissas belong to the QR code on the left, and the three positioning areas with the larger abscissas belong to the QR code on the right. All positioning areas of each two-dimensional code are identified to obtain the identification code information of each two-dimensional code.
- FIG. 13 is a schematic diagram of an application scenario for calculating the actual distance between the target vehicle and the target screen
- the length measurement unit in the target image is pixels (px). This unit can be converted into centimeters (cm) through the preset conversion coefficient between pixels and centimeters, giving the length (in cm) of the side of the identification code in the target image; the actual distance between the target vehicle and the target screen is then calculated from the actual side length of the identification code and that image side length.
- the focal length of the camera is denoted by F
- the actual distance between the target vehicle and the target screen is denoted by Y
- BC is the actual side length of the two-dimensional code, and DE is the number of pixels occupied by the side of the two-dimensional code in the target screen image.
- the conversion relationship between the camera's pixel density PPI, length CM (measured in cm) and the number of pixels PX is as follows:
- PPI is a fixed coefficient; it can be determined in advance or read directly from the camera manual. Therefore, the pixel count DE of the two-dimensional code side in the target screen image can be substituted for PX in formula (8).
- the unit of measurement is converted from pixels to centimeters;
- △ABC and △ADE form a pair of similar triangles.
- the actual distance Y between the target vehicle and the target screen can be obtained.
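The similar-triangle relation above can be sketched in a few lines. The function names and the use of formula (8) with a 2.54 cm-per-inch factor are assumptions about the intended computation, not the patent's exact code:

```python
def px_to_cm(px, ppi):
    # formula (8): convert a pixel count to centimetres via the sensor's
    # pixel density PPI (pixels per inch; 1 inch = 2.54 cm)
    return px * 2.54 / ppi

def distance_from_focal(focal_cm, actual_side_cm, side_px, ppi):
    # similar triangles ABC ~ ADE:  F / Y = image side / actual side
    #   => Y = F * actual_side / image_side
    image_side_cm = px_to_cm(side_px, ppi)
    return focal_cm * actual_side_cm / image_side_cm
```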
- the focal length marked on the lens of a commonly used camera is not equal to the actual focal length of the shot, and after an image is captured the camera may apply some preprocessing (for example, denoising) to it, so that the acquired focal length value F deviates somewhat from the actual focal length.
- the embodiment of the present application provides another method for calculating the actual distance between the target vehicle and the target screen, which can avoid the problem of reduced positioning accuracy caused by inaccurate camera parameters:
- the actual side length of the QR code is represented by X.
- the distance between any vehicle and the target screen is Y2, and the number of pixels X2 of the side length of the corresponding QR code in the target screen image is obtained in advance; at the same time, the pixel-to-centimeter conversion factor PPI is obtained.
- Y is the linear distance between the target screen and the camera, that is, the actual distance between the target vehicle and the target screen.
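The calibration-free variant above can be sketched as follows. Since the apparent side length in pixels is inversely proportional to distance, one reference measurement (Y2, X2) at a known distance replaces the unreliable focal-length value F; the function name is illustrative:

```python
def distance_from_reference(ref_distance, ref_side_px, side_px):
    # the QR side's pixel count is inversely proportional to distance:
    #   Y * PX = Y2 * X2  =>  Y = Y2 * X2 / PX
    return ref_distance * ref_side_px / side_px

# reference: 50 px at 100 cm; the code now spans 25 px -> twice as far
print(distance_from_reference(100.0, 50, 25))  # 200.0
```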
- FIG. 14 is a schematic diagram of an application scenario for calculating the deflection angle between the target vehicle and the target screen.
- the horizontal distance between the camera and the center of the target screen is represented by DX
- the vertical distance between the camera and the center of the target screen is represented by DY. Since the two two-dimensional codes are symmetrical about the center of the target screen, and the distance between each two-dimensional code and the edge of the electronic screen equals one half of the distance between the two two-dimensional codes, the midpoint between the two two-dimensional codes in the target image can be determined as the center point of the target screen image; the horizontal pixel-count difference from the center point of the target image to the center point of the target screen image is represented by C1, and the vertical pixel-count difference by C2.
- the number of pixels across the width of a single two-dimensional code in the target image is PX, and the number of pixels across the height of a single two-dimensional code is PY.
- the actual side length of the QR code is represented by L, and the horizontal distance DX and vertical distance DY can be calculated by the following formula:
- the actual distance between the target vehicle and the target screen is Y
- the calculation formula for the horizontal deflection angle between the target vehicle and the target screen is as follows:
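The formulas referenced above are rendered as images in the original. A sketch of the geometry they appear to encode, under the assumption that DX = C1 · (L / PX), DY = C2 · (L / PY) recovers the offsets on the screen plane and that each deflection angle is the arctangent of offset over viewing distance; all names are illustrative:

```python
import math

def deflection_angles(c1_px, c2_px, side_cm, side_px, distance_cm):
    # C1/C2: horizontal/vertical pixel offsets between the target-image
    # centre and the screen-image centre; side_cm / side_px gives the
    # cm-per-pixel scale on the screen plane (square code assumed)
    cm_per_px = side_cm / side_px
    dx = c1_px * cm_per_px            # horizontal offset DX on the screen
    dy = c2_px * cm_per_px            # vertical offset DY on the screen
    horizontal = math.degrees(math.atan2(dx, distance_cm))
    vertical = math.degrees(math.atan2(dy, distance_cm))
    return horizontal, vertical
```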
- FIG. 15 provides another schematic diagram of an application scenario for calculating the deflection angle between the target vehicle and the target screen.
- the horizontal distance between each preset horizontal line of the QR code and the target vehicle, and the vertical distance between each preset vertical line and the target vehicle, are calculated.
- the steps for calculating the horizontal deflection angle through the MUSIC algorithm are as follows: the distance between two preset vertical lines is d, and the incident signal (that is, the input data) of the MUSIC algorithm is constructed as a matrix from the distance d, where the intermediate variables Z1, Z2, Z3, and Z4 are:
- Y1 represents the distance between the target vehicle and the target screen estimated from the first vertical line segment (such as the left edge of the left QR code in the target screen image);
- Y2 represents the distance between the target vehicle and the target screen estimated from the second vertical line segment;
- Y3 represents the distance between the target vehicle and the target screen estimated from the third vertical line segment;
- Y4 represents the distance between the target vehicle and the target screen estimated from the fourth vertical line segment (such as the right edge of the right QR code in the target screen image).
- H represents the conjugate transpose of the matrix
- A is the direction response vector
- R is the signal correlation matrix, which is extracted from the input signal S(i);
- σ² is the noise power, and
- I is the identity matrix;
- λ is an eigenvalue obtained by the decomposition
- ν(λ) is the eigenvector corresponding to the eigenvalue λ. The eigenvalues λ are sorted by magnitude; the eigenvector ν(λ) corresponding to the largest eigenvalue is taken as the signal subspace, and the other 3 eigenvalues and their corresponding eigenvectors are taken as the noise subspace to obtain the noise matrix E_n.
- the calculated horizontal deflection angle P is:
- a represents the signal vector (extracted from S(i)).
- the angle information of the camera's deflection can be calculated according to the degree of deformation in the image.
- the deformation degrees of multiple line segments in the target image are converted into incident signals and used as the input of the MUSIC algorithm to calculate the deflection angle of the camera relative to the center of the target screen, which serves as the angle between the target vehicle and the target screen.
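The patent feeds per-line-segment distance estimates (Z1-Z4) into MUSIC as the incident signal. As a hedged illustration of the subspace steps described above (correlation matrix R, eigendecomposition, noise matrix E_n, pseudo-spectrum peak), here is a generic MUSIC direction-of-arrival sketch for a uniform linear arrangement; the function, the steering-vector model, and all parameter names are assumptions, not the patent's exact construction:

```python
import numpy as np

def music_spectrum(snapshots, d, wavelength, num_sources, scan_deg):
    # snapshots: (num_sensors, num_snapshots) complex input matrix S(i)
    M = snapshots.shape[0]
    # signal correlation matrix R, extracted from the input signal
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    # eigendecomposition; numpy returns eigenvalues in ascending order
    eigvals, eigvecs = np.linalg.eigh(R)
    # the M - num_sources smallest eigenvectors span the noise subspace E_n
    En = eigvecs[:, : M - num_sources]
    spectrum = np.empty(len(scan_deg))
    for k, ang in enumerate(np.deg2rad(scan_deg)):
        # direction response vector a(theta) for element spacing d
        a = np.exp(-2j * np.pi * d * np.arange(M) * np.sin(ang) / wavelength)
        # MUSIC pseudo-spectrum: peaks where a(theta) is orthogonal to E_n
        spectrum[k] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return spectrum
```

Scanning the pseudo-spectrum over candidate angles and taking the peak yields the deflection estimate, mirroring the P(θ) formula in the text.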
- the deviation of the deflection angle calculated by the MUSIC algorithm is smallest when the difference in the deformation degrees of the multiple first line segments on the identification code is largest.
- the calculation method is as follows:
- the conversion matrix for setting the camera's shooting is:
- K = [θ-N, θ1-N, θ2-N, …, θ0, …, θN-2, θN-1, θN] (24);
- K is the distortion matrix of the camera.
- the actual position of the object is not the same as the position in the image.
- the matrix K expresses the conversion relationship between the actual position of the object and the position in the image.
- the image is a two-dimensional matrix, and correspondingly, K is also a two-dimensional matrix.
- each θ in K is a column vector; θ-N represents the leftmost column vector, and it can be understood that θ1-N represents the second column vector from the left and θ2-N the third column vector from the left.
- based on screen light communication, the deflection angle of the camera relative to the center of the target screen can be obtained through the MUSIC algorithm, and in turn the angle between the target vehicle and the target screen, which improves the efficiency and accuracy of the calculation.
- after step S103, the method further includes:
- a target image containing the target screen image sent by another vehicle is obtained, and the calculation of steps S101 to S103 above is performed to obtain second relative position information between the other vehicle and the target screen; it can be understood that the second relative position information between the other vehicle and the target screen includes the distance and the deflection angle between the other vehicle and the target screen.
- S105 Determine third relative position information between the target vehicle and the other vehicle according to the relative position information and the second relative position information.
- the third relative position information between the target vehicle and the other vehicle includes the distance and angle between them. According to the relative position information between the target vehicle and the target screen and the second relative position information between the other vehicle and the target screen, the distance and angle between the two vehicles can be calculated to determine the third relative position information.
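One plausible way to combine the two fixes into the third relative position information is the law of cosines, treating each vehicle's (distance, horizontal deflection angle) pair as polar coordinates in the screen's frame. This is a sketch under that assumption, with illustrative names; the patent does not spell out the exact combination formula:

```python
import math

def inter_vehicle_distance(y1, angle1_deg, y2, angle2_deg):
    # y1/y2: each vehicle's distance to the screen; angle1/angle2: each
    # vehicle's horizontal deflection angle relative to the screen.
    # Law of cosines on the triangle (screen, vehicle 1, vehicle 2).
    dtheta = math.radians(angle1_deg - angle2_deg)
    return math.sqrt(y1 ** 2 + y2 ** 2 - 2 * y1 * y2 * math.cos(dtheta))
```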
- the target screen image includes at least two identical identification codes.
- identification code information of multiple identification codes can be obtained when a target image is shot with a monocular camera, and the positioning calculation is performed on the identification code information of at least two identification codes.
- the vehicle positioning is less affected by environmental factors.
- the identification code information further includes road traffic information of the area where the target screen is located. After the relative position information between the target vehicle and the target screen is determined according to the identification code information, the method further includes:
- the driving instruction is sent to the target vehicle to control the target vehicle to travel according to the driving instruction.
- the road traffic information of the area where the target screen is located is obtained; the relative position information and the road traffic information are analyzed to determine the road condition information of the road where the target vehicle is located; a driving instruction corresponding to the target vehicle is generated and sent to the target vehicle, controlling the target vehicle to drive according to the driving instruction.
- the target image containing the target screen image sent by the target vehicle is processed, the identification code in the target screen image is recognized to obtain identification code information, and the relative position information between the target vehicle and the target screen is calculated according to the identification code information.
- the equipment cost is low, the influence of environmental factors on the ranging accuracy is reduced, and the stability of the vehicle positioning is improved.
- FIG. 16 shows a schematic flowchart of a vehicle positioning method based on screen light communication provided by the present application. As an example and not a limitation, the method may be applied to a vehicle.
- S203 Send the target image to a server, so that the server determines the relative position information between the vehicle and the target screen according to the target image.
- the camera is controlled to capture images in real time, and the images are analyzed and recognized.
- when the target screen image is included in a captured image, the image is taken as the target image and sent to the server, so that the server performs image recognition on the identification code in the target image to obtain the identification code information, and then determines the relative position information between the vehicle and the target screen according to the identification code information.
- the image is acquired in real time, and when it is recognized that the image includes the target screen image, the image is sent to the server as the target image, so that the server can determine the relative position information between the vehicle and the target screen based on the target image.
- the screen light communication between the screen and the vehicle realizes a wide range of high-precision vehicle positioning operations, with low equipment costs, and at the same time improves the range and stability of high-precision positioning of the vehicle.
- FIG. 17 shows a structural block diagram of the vehicle positioning device 100 based on screen light communication provided by an embodiment of the present application.
- the positioning device 100 is applied to a server, and for ease of description, only the parts related to the embodiment of the present application are shown.
- the vehicle positioning device 100 based on screen light communication includes:
- the receiving module 101 is configured to receive a target image sent by a target vehicle, where the target image includes a target screen image; the target screen image includes at least one identification code;
- the identification module 102 is configured to perform image recognition on the identification code to obtain identification code information
- the determining module 103 is configured to determine the relative position information between the target vehicle and the target screen according to the identification code information
- the device 100 further includes:
- the obtaining module 104 is configured to obtain second relative position information between other vehicles and the target screen;
- the second determining module 105 is configured to determine third relative position information between the target vehicle and the other vehicles according to the relative position information and the second relative position information.
- the relative position information includes the actual distance between the target vehicle and the target screen
- the identification code information includes the actual side length of the preset side in the identification code
- the determining module 103 includes:
- the first determining unit 1031 is configured to determine the image side length of the preset side in the target screen image
- the first calculation unit 1032 is configured to calculate the actual distance according to the actual side length of the preset side, the image side length of the preset side in the target screen image, and a preset conversion coefficient.
- the relative position information includes a deflection angle of the target vehicle relative to the target screen
- the identification code information includes display position information of the identification code on the target screen
- the determining module 103 includes:
- the second determining unit 1033 is configured to determine the center point of the target screen image according to the display position information
- the second calculation unit 1034 is configured to calculate the image length difference between the center point of the target image and the center point of the target screen image
- the third determining unit 1035 is configured to determine the deflection angle according to the image length difference.
- the relative position information includes the deflection angle of the target vehicle relative to the target screen;
- the identification code information includes the actual lengths of a plurality of first line segments preset in the identification code;
- the determining module 103 includes:
- the fourth determining unit 1036 is configured to determine the image length of the multiple first line segments in the target screen image
- the third calculation unit 1037 is configured to calculate the distances between the plurality of first line segments and the target vehicle respectively, according to the image lengths of the plurality of first line segments in the target screen image, the actual lengths of the plurality of first line segments, and a preset conversion coefficient;
- the fifth determining unit 1038 is configured to determine the deflection angle according to the distance between the plurality of first line segments and the target vehicle.
- the identification code information further includes road traffic information of the area where the target screen is located, and the device 100 further includes:
- a generating module configured to generate a driving instruction corresponding to the target vehicle according to the relative position information and the road traffic information, the driving instruction including the driving speed and the driving direction;
- the sending module is used to send the driving instruction to the target vehicle to control the target vehicle to drive according to the driving instruction.
- the target image containing the target screen image sent by the target vehicle is processed, the identification code in the target screen image is recognized to obtain identification code information, and the relative position information between the target vehicle and the target screen is calculated according to the identification code information.
- the equipment cost is low, the influence of environmental factors on the ranging accuracy is reduced, and the stability of the vehicle positioning is improved.
- FIG. 18 shows a structural block diagram of a vehicle positioning device 200 based on screen light communication provided by an embodiment of the present application.
- the positioning device 200 is applied to a vehicle, and for ease of description, only the parts related to the embodiment of the present application are shown.
- the vehicle positioning device 200 based on screen light communication includes:
- the obtaining module 201 is used to obtain an image
- the determining module 202 is configured to determine that the image is a target image when it is recognized that the image includes a target screen image;
- the sending module 203 is configured to send the target image to the server, so that the server determines the relative position information between the vehicle and the target screen according to the target image.
- the image is acquired in real time, and when it is recognized that the image includes the target screen image, the image is sent to the server as the target image, so that the server can determine the relative position information between the vehicle and the target screen based on the target image.
- the screen light communication between the screen and the vehicle realizes a wide range of high-precision vehicle positioning operations, with low equipment costs, and at the same time improves the range and stability of high-precision positioning of the vehicle.
- FIG. 19 is a schematic structural diagram of a server provided by an embodiment of this application.
- the server 19 of this embodiment includes: at least one processor 190 (only one is shown in FIG. 19), a memory 191, and a computer program stored in the memory 191 and executable on the at least one processor.
- the server 19 may be a computing device such as a cloud server.
- the server may include, but is not limited to, a processor 190 and a memory 191.
- FIG. 19 is only an example of the server 19 and does not constitute a limitation on the server 19; it may include more or fewer components than shown, a combination of certain components, or different components, and may also include, for example, input and output devices and network access devices.
- the so-called processor 190 may be a central processing unit (Central Processing Unit, CPU); the processor 190 may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
- the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
- the memory 191 may be an internal storage unit of the server 19 in some embodiments, such as a hard disk or memory of the server 19.
- the memory 191 may also be an external storage device of the server 19, such as a plug-in hard disk equipped on the server 19, a smart media card (Smart Media Card, SMC), a secure digital card (Secure Digital, SD), a flash card (Flash Card), etc.
- the memory 191 may also include both an internal storage unit of the server 19 and an external storage device.
- the memory 191 is used to store an operating system, an application program, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program.
- the memory 191 can also be used to temporarily store data that has been output or will be output.
- An embodiment of the present application also provides a server, which includes: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor; when the processor executes the computer program, the steps in any of the foregoing method embodiments are implemented.
- the embodiments of the present application also provide a computer-readable storage medium that stores a computer program; when the computer program is executed by a processor, the steps in each of the foregoing method embodiments are implemented.
- the embodiments of the present application also provide a computer program product.
- when the computer program product runs on a mobile terminal, the mobile terminal implements the steps in the foregoing method embodiments.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Traffic Control Systems (AREA)
Abstract
Description
Claims (7)
- A vehicle positioning method based on screen optical communication, characterized in that it is applied to a server and comprises: receiving a target image sent by a target vehicle, the target image including a target screen image, and the target screen image including at least one identification code; performing image recognition on the identification code to obtain identification code information; and determining relative position information between the target vehicle and a target screen according to the identification code information.
- The method according to claim 1, characterized in that the relative position information includes an actual distance between the target vehicle and the target screen, and the identification code information includes an actual side length of a preset side in the identification code; determining the relative position information between the target vehicle and the target screen according to the identification code information comprises: determining an image side length of the preset side in the target screen image; and calculating the actual distance according to the actual side length of the preset side, the image side length of the preset side in the target screen image, and a preset conversion coefficient.
- The method according to claim 1, characterized in that the relative position information includes a deflection angle of the target vehicle relative to the target screen, and the identification code information includes display position information of the identification code on the target screen; determining the relative position information between the target vehicle and the target screen according to the identification code information comprises: determining a center point of the target screen image according to the display position information; calculating an image length difference between the center point of the target image and the center point of the target screen image; and determining the deflection angle according to the image length difference.
- The method according to claim 1, characterized in that the relative position information includes a deflection angle of the target vehicle relative to the target screen, and the identification code information includes actual lengths of a plurality of preset first line segments in the identification code; determining the relative position information between the target vehicle and the target screen according to the identification code information comprises: determining image lengths of the plurality of first line segments in the target screen image; calculating distances between the plurality of first line segments and the target vehicle respectively according to the image lengths of the plurality of first line segments in the target screen image, the actual lengths of the plurality of first line segments, and a preset conversion coefficient; and determining the deflection angle according to the distances between the plurality of first line segments and the target vehicle.
- The method according to any one of claims 1 to 4, characterized in that the identification code information further includes road traffic information of the area where the target screen is located; after determining the relative position information between the target vehicle and the target screen according to the identification code information, the method further comprises: generating a driving instruction corresponding to the target vehicle according to the relative position information and the road traffic information, the driving instruction including a driving speed and a driving direction; and sending the driving instruction to the target vehicle to control the target vehicle to drive according to the driving instruction.
- A vehicle positioning method based on screen optical communication, characterized in that it is applied to a vehicle and comprises: acquiring an image; when it is recognized that the image includes a target screen image, determining that the image is a target image; and sending the target image to a server, so that the server determines relative position information between the vehicle and a target screen according to the target image.
- A server, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the method according to any one of claims 1 to 5, or claim 6.
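Claims 2 to 4 describe geometric computations: a camera-to-screen distance recovered from the ratio between a known physical edge of the identification code and its length in the image, and a deflection angle recovered from the pixel offset between the center of the target image and the center of the target screen image. The sketch below is a minimal illustration under an assumed pinhole-style camera model; the function names, the `conversion_px` factor (roughly a focal length in pixels), and the arctangent mapping are illustrative assumptions, not the patent's actual implementation:

```python
import math

def estimate_distance(actual_edge_m, image_edge_px, conversion_px):
    # Similar triangles: a known real edge (metres) that spans fewer
    # pixels in the image is farther away; conversion_px plays the
    # role of a focal length expressed in pixels (assumed calibration).
    return actual_edge_m * conversion_px / image_edge_px

def estimate_deflection_deg(image_center_x_px, screen_center_x_px, conversion_px):
    # The horizontal pixel offset between the full image's center and
    # the detected screen image's center maps to a viewing angle.
    offset_px = screen_center_x_px - image_center_x_px
    return math.degrees(math.atan2(offset_px, conversion_px))

# A 0.5 m code edge spanning 100 px with an 800 px conversion factor
# puts the screen about 4 m from the camera.
print(estimate_distance(0.5, 100.0, 800.0))
print(estimate_deflection_deg(960.0, 1160.0, 800.0))
```

The claim-4 variant would apply the same per-segment distance estimate to several known line segments of the code and infer the deflection angle from the differences between those distances.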
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2020/096844 WO2021253333A1 (zh) | 2020-06-18 | 2020-06-18 | Vehicle positioning method and apparatus based on screen optical communication, and server |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2020/096844 WO2021253333A1 (zh) | 2020-06-18 | 2020-06-18 | Vehicle positioning method and apparatus based on screen optical communication, and server |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021253333A1 true WO2021253333A1 (zh) | 2021-12-23 |
Family
ID=79269084
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/096844 WO2021253333A1 (zh) | 2020-06-18 | 2020-06-18 | Vehicle positioning method and apparatus based on screen optical communication, and server |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2021253333A1 (zh) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104637330A (zh) * | 2015-02-15 | 2015-05-20 | 国家电网公司 | Vehicle-mounted navigation communication system based on video two-dimensional code and overspeed prevention method |
CN107871395A (zh) * | 2016-09-23 | 2018-04-03 | 孙少东 | Intelligent traffic congestion avoidance system |
CN108663054A (zh) * | 2017-03-28 | 2018-10-16 | 茹景阳 | Vehicle positioning method and apparatus |
CN108810811A (zh) * | 2018-08-12 | 2018-11-13 | 苏州鑫丰恒富科技有限公司 | System and method for creating and updating a Wi-Fi fingerprint database for a large garage |
KR102009352B1 (ko) * | 2018-03-21 | 2019-10-21 | 주식회사 메이플테크 | System composed of culture containers according to plant growth conditions using IoT devices |
CN110515464A (zh) * | 2019-08-28 | 2019-11-29 | 百度在线网络技术(北京)有限公司 | AR display method, apparatus, vehicle, and storage medium |
CN110675627A (zh) * | 2019-09-30 | 2020-01-10 | 山东科技大学 | Traffic information acquisition method and system based on two-dimensional code recognition |
CN110992723A (zh) * | 2019-12-27 | 2020-04-10 | 魏贞民 | Unmanned traffic navigation signal device and management system thereof |
2020
- 2020-06-18 WO PCT/CN2020/096844 patent/WO2021253333A1/zh active Application Filing
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104637330A (zh) * | 2015-02-15 | 2015-05-20 | 国家电网公司 | Vehicle-mounted navigation communication system based on video two-dimensional code and overspeed prevention method |
CN107871395A (zh) * | 2016-09-23 | 2018-04-03 | 孙少东 | Intelligent traffic congestion avoidance system |
CN108663054A (zh) * | 2017-03-28 | 2018-10-16 | 茹景阳 | Vehicle positioning method and apparatus |
KR102009352B1 (ko) * | 2018-03-21 | 2019-10-21 | 주식회사 메이플테크 | System composed of culture containers according to plant growth conditions using IoT devices |
CN108810811A (zh) * | 2018-08-12 | 2018-11-13 | 苏州鑫丰恒富科技有限公司 | System and method for creating and updating a Wi-Fi fingerprint database for a large garage |
CN110515464A (zh) * | 2019-08-28 | 2019-11-29 | 百度在线网络技术(北京)有限公司 | AR display method, apparatus, vehicle, and storage medium |
CN110675627A (zh) * | 2019-09-30 | 2020-01-10 | 山东科技大学 | Traffic information acquisition method and system based on two-dimensional code recognition |
CN110992723A (zh) * | 2019-12-27 | 2020-04-10 | 魏贞民 | Unmanned traffic navigation signal device and management system thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10964054B2 (en) | Method and device for positioning | |
CN110322500B (zh) | Optimization method and apparatus for simultaneous localization and mapping, medium, and electronic device | |
US20180189577A1 (en) | Systems and methods for lane-marker detection | |
WO2020108311A1 (zh) | Target object 3D detection method, apparatus, medium, and device | |
US11694445B2 (en) | Obstacle three-dimensional position acquisition method and apparatus for roadside computing device | |
US11783507B2 (en) | Camera calibration apparatus and operating method | |
WO2022179566A1 (zh) | Extrinsic parameter calibration method and apparatus, electronic device, and storage medium | |
CN113762003B (zh) | Target object detection method, apparatus, device, and storage medium | |
CN112947419B (zh) | Obstacle avoidance method, apparatus, and device | |
CN109828250B (zh) | Radar calibration method, calibration apparatus, and terminal device | |
KR101772438B1 (ko) | Apparatus and method for detecting bar-shaped signals in a road sign recognition system | |
US10679090B2 (en) | Method for estimating 6-DOF relative displacement using vision-based localization and apparatus therefor | |
CN114399675A (zh) | Target detection method and apparatus based on fusion of machine vision and lidar | |
CN113034586B (zh) | Road inclination angle detection method and detection system | |
CN111862208B (zh) | Vehicle positioning method and apparatus based on screen optical communication, and server | |
CN114662600A (zh) | Lane line detection method, apparatus, and storage medium | |
WO2021253333A1 (zh) | Vehicle positioning method and apparatus based on screen optical communication, and server | |
CN113112551B (zh) | Camera parameter determination method and apparatus, roadside device, and cloud control platform | |
US20160379087A1 (en) | Method for determining a similarity value between a first image and a second image | |
CN113763457B (zh) | Calibration method and apparatus for drop terrain, electronic device, and storage medium | |
CN114638947A (zh) | Data annotation method and apparatus, electronic device, and storage medium | |
CN104236518B (zh) | Antenna main beam pointing detection method based on optical imaging and pattern recognition | |
CN117677862A (zh) | Artifact point identification method, terminal device, and computer-readable storage medium | |
JP7064400B2 (ja) | Object detection device | |
Fei et al. | Obstacle Detection for Agricultural Machinery Vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20941451 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20941451 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 09/08/2023) |
|