WO2021253333A1 - Vehicle positioning method and device based on screen optical communication, and server - Google Patents

Vehicle positioning method and device based on screen optical communication, and server

Info

Publication number
WO2021253333A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
image
vehicle
screen
identification code
Application number
PCT/CN2020/096844
Other languages
English (en)
French (fr)
Inventor
赵毓斌
文考
须成忠
Original Assignee
中国科学院深圳先进技术研究院
Application filed by 中国科学院深圳先进技术研究院
Priority to PCT/CN2020/096844
Publication of WO2021253333A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 - Details of database functions independent of the retrieved data types
    • G06F16/95 - Retrieval from the web
    • G06F16/955 - Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods

Definitions

  • This application relates to the field of positioning technology, and in particular to a vehicle positioning method, device and server based on screen optical communication.
  • Vehicle positioning technologies usually include the following: high-precision radar-based positioning methods, lidar-based positioning methods, and camera-based positioning methods.
  • The positioning method based on high-precision radar installs one or more high-precision radars on the vehicle. The radar emits ultrasonic pulses and, exploiting the reflection characteristics of ultrasonic waves, receives the waves reflected by surrounding objects; by analyzing the detected ultrasonic waveform it identifies the surrounding objects and thereby determines the relative position between the vehicle and each object.
  • The positioning method based on lidar installs a lidar on the vehicle. The lidar emits a laser beam while receiving the beams reflected by surrounding objects; by comparing the reflected beams with the emitted beam, it detects the position, speed and other characteristics of the surrounding objects, and thereby determines the relative position between the vehicle and each object.
  • the camera-based positioning method refers to a technical solution that uses a camera for positioning, which can be divided into two methods: monocular camera positioning and multi-camera positioning.
  • The principle of monocular camera positioning is that objects photographed by a monocular camera appear larger when near and smaller when far.
  • Given the vehicle speed and the camera focal length, multiple images of the same object are taken at fixed intervals, and the actual distance between the object and the camera is calculated from the change in the object's size across the images, which in turn determines the relative position between the vehicle and the object.
  • Binocular or multi-camera positioning is based on the principle of parallax.
  • The same object is captured by multiple cameras, the offset of the object across the images is calculated, and the actual distance between the object and the cameras is derived from this offset and the distance between the cameras, which in turn determines the relative position between the vehicle and the object.
  • However, high-precision radars currently on the market are relatively expensive, lidar is easily affected by weather and the environment during positioning, and monocular or multi-camera setups can only achieve high-precision positioning over a small range.
  • the purpose of the embodiments of the present application is to provide a vehicle positioning method, device, and server based on screen light communication, including but not limited to solving the problems of high cost, low accuracy, small range, or low stability of related vehicle positioning methods.
  • In a first aspect, a vehicle positioning method based on screen optical communication is provided, which is applied to a server and includes:
  • receiving a target image sent by a target vehicle, where the target image includes a target screen image and the target screen image includes at least one identification code;
  • performing image recognition on the identification code to obtain identification code information;
  • determining the relative position information between the target vehicle and the target screen according to the identification code information.
  • In a second aspect, a vehicle positioning method based on screen optical communication is provided, which is applied to a vehicle and includes:
  • acquiring an image;
  • when it is recognized that the image includes a target screen image, determining that the image is a target image;
  • sending the target image to the server, so that the server determines the relative position information between the vehicle and the target screen according to the target image.
  • In a third aspect, a vehicle positioning device based on screen optical communication is provided, which is applied to a server and includes:
  • a receiving module configured to receive a target image sent by a target vehicle, where the target image includes a target screen image and the target screen image includes at least one identification code;
  • an identification module configured to perform image recognition on the identification code to obtain identification code information;
  • a determining module configured to determine the relative position information between the target vehicle and the target screen according to the identification code information.
  • In a fourth aspect, a vehicle positioning device based on screen optical communication is provided, which is applied to a vehicle and includes:
  • an acquisition module configured to acquire an image;
  • a judging module configured to determine that the image is a target image when it is recognized that the image includes a target screen image;
  • a sending module configured to send the target image to the server, so that the server determines the relative position information between the vehicle and the target screen according to the target image.
  • In a fifth aspect, an embodiment of the present application provides a server, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the vehicle positioning method based on screen optical communication described in any one of the above first aspects.
  • In a sixth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the vehicle positioning method based on screen optical communication described in any one of the above first aspects.
  • In a seventh aspect, the embodiments of the present application provide a computer program product which, when run on a terminal device, causes the terminal device to execute the vehicle positioning method based on screen optical communication according to any one of the above first aspects.
  • The beneficial effect of the vehicle positioning method based on screen optical communication is as follows: the target image containing the target screen image sent by the target vehicle is processed, the identification code in the target screen image is recognized to obtain identification code information, and the relative position information between the target vehicle and the target screen is calculated according to that information. Positioning through screen optical communication between the target screen and the vehicle realizes wide-range, high-precision vehicle positioning at low equipment cost, reduces the influence of environmental factors on ranging accuracy, and thereby improves the stability of vehicle positioning.
  • FIG. 1 is an architecture diagram of a vehicle positioning system based on screen optical communication provided by an embodiment of the present application
  • FIG. 2 is a schematic flowchart of a vehicle positioning method based on screen optical communication provided by an embodiment of the present application
  • FIG. 3 is a schematic diagram of an application scenario for performing binarization processing on a target image provided by an embodiment of the present application
  • FIG. 4 is a schematic diagram of a target screen of a vehicle positioning method based on screen optical communication provided by another embodiment of the present application.
  • FIG. 5 is a schematic diagram of a target screen of a vehicle positioning method based on screen optical communication provided by another embodiment of the present application.
  • FIG. 6 is a schematic diagram of a target screen image including a first line segment in a vehicle positioning method based on screen light communication provided by an embodiment of the present application;
  • FIG. 7 is a schematic diagram of an application scenario of performing boundary suppression processing on a pre-processed target image in a vehicle positioning method based on screen light communication provided by an embodiment of the present application;
  • FIG. 8 is a schematic diagram of an application scenario of performing boundary suppression processing on a pre-processed target image of a vehicle positioning method based on screen light communication provided by an embodiment of the present application;
  • FIG. 9 is a schematic diagram of an application scenario of performing boundary suppression processing on a pre-processed target image of a vehicle positioning method based on screen optical communication provided by an embodiment of the present application;
  • FIG. 10 is a schematic diagram of an application scenario for determining a two-dimensional code positioning area of a vehicle positioning method based on screen optical communication provided by an embodiment of the present application;
  • FIG. 11 is a schematic diagram of a two-dimensional code positioning area of a vehicle positioning method based on screen optical communication according to an embodiment of the present application.
  • FIG. 12 is a schematic diagram of an application scenario for determining a two-dimensional code positioning area of a vehicle positioning method based on screen optical communication according to another embodiment of the present application;
  • FIG. 13 is a schematic diagram of an application scenario for calculating the actual distance between the target vehicle and the target screen of the vehicle positioning method based on screen optical communication provided by an embodiment of the present application;
  • FIG. 14 is a schematic diagram of an application scenario for calculating the deflection angle between the target vehicle and the target screen of the vehicle positioning method based on screen light communication provided by an embodiment of the present application;
  • FIG. 15 is a schematic diagram of an application scenario of detecting a two-dimensional code positioning area of a vehicle positioning method based on screen optical communication provided by another embodiment of the present application;
  • FIG. 16 is a schematic flowchart of a vehicle positioning method based on screen optical communication according to another embodiment of the present application;
  • FIG. 17 is a schematic structural diagram of a vehicle positioning device based on screen optical communication according to an embodiment of the present application.
  • FIG. 18 is a schematic structural diagram of a vehicle positioning device based on screen optical communication according to another embodiment of the present application.
  • FIG. 19 is a schematic diagram of the structure of a server provided by an embodiment of the present application.
  • the vehicle positioning method based on screen optical communication provided in the embodiments of the present application can be applied to terminal devices such as servers or vehicles, and the embodiments of the present application do not impose any restrictions on the specific types of terminal devices.
  • To this end, this application proposes a vehicle positioning method based on screen optical communication, a vehicle positioning device based on screen optical communication, a server and a computer-readable storage medium, which can achieve high-precision vehicle positioning through screen optical communication between the vehicle and the screen while the vehicle is driving automatically.
  • The vehicle positioning system based on screen optical communication consists of one or more screens (only one is shown in FIG. 1), one or more autonomous vehicles (only three are shown in FIG. 1: vehicle a, vehicle b and vehicle c) and a server; the screens and the self-driving vehicles can carry out screen optical communication, and the self-driving vehicles and the server can communicate with each other.
  • the self-driving vehicle is a vehicle that may have a need for vehicle positioning services to realize automatic driving
  • the screen is a positioning device that can provide positioning services.
  • When an autonomous vehicle is in the process of autonomous driving, it can send a target image including a target screen image to the server of the vehicle positioning system based on screen optical communication; after receiving the target image sent by the autonomous vehicle, the server can recognize the target screen image to obtain identification code information and determine the relative position information between the autonomous vehicle and the target screen according to the identification code information.
  • FIG. 2 shows a schematic flowchart of a vehicle positioning method based on screen light communication provided by the present application.
  • the method can be applied to the above-mentioned server.
  • S101 Receive a target image sent by a target vehicle, where the target image includes a target screen image; and the target screen image includes at least one identification code.
  • Specifically, a target image that is captured by a camera on the target vehicle and contains a target screen image is received; the camera may be a monocular camera.
  • multiple screens can be set in each region (or city) in advance, and the number of screens can be specifically set according to actual conditions; for example, 10,000 screens can be set in City A.
  • each screen is used to display at least one identification code to provide identification code information.
  • the types of screens include, but are not limited to, electronic screens, road signs, or printed matter.
  • the identification code can be a two-dimensional code or other image that can be used for positioning and can display the identification code information at the same time.
  • the target screen refers to the screen corresponding to the target image.
  • The target image taken by the target vehicle may include images of objects other than the target screen. Therefore, the target image needs to be preprocessed for image noise reduction; the preprocessed target image reduces the impact of environmental noise on the accuracy of vehicle positioning and thereby improves that accuracy.
  • the preprocessing includes but is not limited to at least one of denoising processing and binarization processing.
  • The maximum between-class variance (Otsu) algorithm can be used to calculate the conversion threshold T of the binarization process: the values of pixels whose gray level is greater than T are converted to 255 and those whose gray level is less than T are converted to 0; or, conversely, pixels whose gray level is greater than T are converted to 0 and those whose gray level is less than T are converted to 255, completing the image binarization. It should be noted that T ranges over 0~255.
  • In this embodiment, the gray value of a pixel in the target image with gray value greater than T is converted to 0, and the gray value of a pixel with gray value less than T is converted to 255.
  • Let ω0 denote the percentage of identification-code pixels among the pixels of the target image, μ0 the average gray level of the identification-code pixels, ω1 the percentage of the pixels other than the identification code, and μ1 their average gray level; let μ denote the total average gray level of the target image and g the between-class variance.
  • The target image is represented by O(x, y), where (x, y) is the position coordinate of a pixel, and the target image is M pixels × N pixels. The number of pixels in the target image whose gray value is less than the conversion threshold T is N0, and the number whose gray value is greater than T is N1.
  • From the size M × N of the target image O(x, y) and the conversion threshold T, the quantities ω0, μ0, ω1, μ1, μ and g can be obtained. The relationships between N0, N1 and these quantities are:
  • ω0 = N0 / (M × N), ω1 = N1 / (M × N), N0 + N1 = M × N, ω0 + ω1 = 1;
  • μ = ω0 × μ0 + ω1 × μ1;
  • g = ω0 × (μ0 - μ)² + ω1 × (μ1 - μ)² = ω0 × ω1 × (μ0 - μ1)², and T is chosen to maximize g.
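  • As a minimal sketch of the thresholding just described (Python with NumPy; the function names are illustrative, not from the patent), the threshold T is found by maximizing g and the inverted mapping used in this embodiment is then applied:

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    # Pick the conversion threshold T in 0..255 that maximizes the
    # between-class variance g = w0 * w1 * (u0 - u1)^2 defined above.
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = gray.size
    best_t, best_g = 0, -1.0
    for t in range(1, 256):
        n0 = hist[:t].sum()                 # pixels with gray value < T
        n1 = total - n0                     # pixels with gray value >= T
        if n0 == 0 or n1 == 0:
            continue
        w0, w1 = n0 / total, n1 / total     # class weights, w0 + w1 = 1
        u0 = (hist[:t] * np.arange(t)).sum() / n0       # mean gray of class 0
        u1 = (hist[t:] * np.arange(t, 256)).sum() / n1  # mean gray of class 1
        g = w0 * w1 * (u0 - u1) ** 2        # between-class variance
        if g > best_g:
            best_t, best_g = t, g
    return best_t

def binarize(gray: np.ndarray) -> np.ndarray:
    # Per this embodiment: gray value greater than T -> 0, less than T -> 255.
    t = otsu_threshold(gray)
    return np.where(gray > t, 0, 255).astype(np.uint8)
```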
  • FIG. 3 exemplarily shows a schematic diagram of an application scenario after the target image is binarized.
  • In FIG. 3, the target screen is an electronic screen and the electronic screen includes two identification codes, each identification code being a two-dimensional code; correspondingly, the target image includes one electronic screen image, and the electronic screen image includes two two-dimensional codes.
  • The identification code information is displayed on the identification code and is used to calculate the relative position information between the target vehicle and the target screen. Depending on the identification code that is set, the identification code information obtained by recognition can also differ.
  • The following describes, by way of example and with reference to FIG. 4 and FIG. 5, the identification code information in the target screen image provided by this application.
  • The identification code information includes but is not limited to the actual size (or actual side length) of the identification code, the display position information of the identification code on the target screen, the identification of the target screen, and the road condition information of the target screen's location at the current moment.
  • Each screen is set with a different identification, so the identification of the target screen included in the identification code information can be recognized, the screen corresponding to that identification can be determined as the target screen, and the location information of the target screen can be determined from the identification.
  • For example, if the identification of the target screen included in the identification code information is ID008, the screen ID008 can be determined to be the target screen, and the location information of the target screen ID008 can be obtained at the same time.
  • the identification code can be updated at a preset time interval to update the identification code information carried on the identification code, thereby updating the road condition information at the location of the target screen in real time.
  • the preset time interval is specifically set according to actual conditions. For example, if the preset time interval is set to 30s, the identification code can be updated every 30s.
  • FIG. 4 exemplarily provides a schematic diagram of a target screen, where the target screen is an electronic screen that includes one two-dimensional code.
  • The two-dimensional code is symmetric about the center of the electronic screen, and the distances between the four sides of the two-dimensional code and the border of the electronic screen are the same.
  • The positioning area of the two-dimensional code can be determined in the target screen image through image preprocessing and boundary suppression processing of the two-dimensional code.
  • The position of the QR code in the target image is thereby determined; the QR code image is then cropped out according to its specific position in the target image and sent to a QR code parser, which analyzes and recognizes the two-dimensional code to obtain the identification code information.
  • the target screen may include more than two identification codes.
  • the display position information may also include relative position information between the multiple identification codes.
  • FIG. 5 exemplarily provides a schematic diagram of another target screen, where the target screen is an electronic screen.
  • Two QR codes with identical content are arranged on the left and right of the electronic screen; the QR code on the left is displayed on the electronic screen rotated 90° to the right.
  • The distance between each QR code and the edge of the electronic screen is a, and the distance between the two QR codes is 2a; that is, the distance between each QR code and the edge of the electronic screen equals one half of the distance between the two QR codes.
  • Accordingly, the identification code information of each two-dimensional code in FIG. 5 should include: the size (or side length) of the two-dimensional code; the display position information of the two-dimensional code and the other two-dimensional code on the electronic screen, namely that the two codes are symmetric about the center of the electronic screen, that each code's distance to the screen border equals the other's, and that the distance between the two codes is twice the distance between each code and the edge of the screen; and the road condition information of the target screen's location at the current moment.
  • S103 Determine relative position information between the target vehicle and the target screen according to the identification code information.
  • the relative position information between the target vehicle and the target screen includes the actual distance and deflection angle between the target vehicle and the target screen.
  • Specifically, the corresponding target screen is determined according to the target screen identification, and the actual distance and deflection angle between the target vehicle and the target screen are calculated according to the actual size (side length) of the identification code and the display position information of the identification code in the target screen image.
  • The image side length of the preset side in the target screen image is obtained; it can be expressed as a number of pixels. This image side length is converted from pixels (px) into centimeters (cm), and the actual distance between the target vehicle and the electronic screen is calculated from the length in centimeters of the preset side in the target screen image and the actual side length of the preset side.
  • the deflection angle between the target vehicle and the target screen includes a horizontal deflection angle and a vertical deflection angle
  • The positions of the multiple first line segments in the target screen image can be determined according to the display position information of the identification code on the target screen, and the image lengths of the multiple first line segments in the target screen image can be calculated. From the actual lengths of the multiple first line segments, their image lengths in the target screen image and a preset conversion coefficient, the distance between each first line segment and the target vehicle is calculated respectively, and from these distances the deflection angle between the target vehicle and the target screen is calculated.
  • The unit of length measurement in an image is the pixel (px). Therefore, the image side length of the preset side can be expressed as the number of pixels of the preset side in the target screen image, the image length difference between the center point pixel of the target image and the center point pixel of the target screen image can be expressed as the difference in their pixel counts, and the image lengths of the multiple first line segments can be expressed as their numbers of pixels in the target screen image. The actual side length of the preset side and the actual lengths of the first line segments are measured in centimeters (cm); therefore, when calculating the relative position information between the target vehicle and the target screen, the measurement unit must be converted with the preset conversion coefficient between pixels and centimeters, converting pixels (px) into centimeters (cm).
  • In a possible implementation, the relative position information includes the actual distance between the target vehicle and the target screen, and the identification code information includes the actual side length of a preset side in the identification code; the step S103 then includes:
  • determining the image side length of the preset side in the target screen image;
  • calculating the actual distance according to the actual side length of the preset side, the image side length of the preset side in the target screen image, and a preset conversion coefficient.
  • In this embodiment, the image side length of the preset side in the target screen image can be expressed as the number of pixels that the preset side occupies; converting the measurement unit with the preset conversion coefficient between pixels and centimeters yields the length in centimeters of the preset side in the target screen image, and the actual distance between the target vehicle and the target screen is calculated from that length and the actual side length of the preset side.
  • The preset side can be set according to actual conditions. For example, when the identification code is a rectangle, the preset side is set to the height of the identification code; correspondingly, the height included in the identification code information is the actual side length of the preset side.
  • A two-dimensional code is generally square in practical applications, so the preset side can be set to any side of the two-dimensional code; correspondingly, the actual side length of the two-dimensional code included in the identification code information is the actual side length of the preset side.
  • The number of pixels of any side of the QR code in the target screen image can thus be obtained, and the actual distance between the target vehicle and the target screen is calculated from the actual side length of the QR code and that pixel count.
  • the relative position information includes the deflection angle of the target vehicle relative to the target screen;
  • the identification code information includes the actual lengths of a plurality of first line segments preset in the identification code;
  • the step S103 includes:
  • determining the image lengths of the plurality of first line segments in the target screen image;
  • according to the image lengths of the plurality of first line segments in the target screen image, the actual lengths of the plurality of first line segments and a preset conversion coefficient, respectively calculating the distances between the plurality of first line segments and the target vehicle;
  • determining the deflection angle according to the distances between the plurality of first line segments and the target vehicle.
  • the relative position information includes the deflection angle of the target vehicle relative to the target screen, and the deflection angle includes the horizontal deflection angle and the vertical deflection angle.
  • The identification code information includes the actual lengths of a plurality of first line segments preset in the identification code, where a first line segment is a line segment in the identification code used to determine the deflection angle between the target vehicle and the target screen; the position of a first line segment in the identification code can be set according to actual conditions, and its actual length varies with that position.
  • The image lengths of the first line segments in the target screen image are usually measured in pixels (px). According to the preset conversion coefficient between pixels and centimeters, each image length is converted into centimeters, and the distance between each first line segment and the target vehicle is calculated from the length in centimeters of that first line segment in the target screen image and its actual length.
  • the multiple first line segments should include multiple horizontal line segments and multiple vertical line segments.
  • the distance between the multiple first line segments in the target screen image and the target vehicle includes a horizontal distance and a vertical distance, which are respectively used to calculate the vertical deflection angle of the target vehicle relative to the target screen and the target vehicle relative to the target screen The horizontal declination.
  • All the horizontal distances can be processed by a preset algorithm to obtain the vertical deflection angle between the target vehicle and the target screen, and all the vertical distances can be processed by the preset algorithm to obtain the horizontal deflection angle between the target vehicle and the target screen.
  • The preset algorithm includes, but is not limited to, the MUSIC (Multiple Signal Classification) algorithm.
  • the degree of deformation of the plurality of first line segments is determined.
  • The deflection angle between the target vehicle and the target screen can be calculated from the degree of deformation of the multiple first line segments, so that a monocular camera simulates a multi-camera positioning method and the accuracy error of the deflection angle is reduced. Since this does not rely on an image matching algorithm over multiple images and is less affected by environmental factors, it can realize vehicle positioning in complex situations.
  • The actual lengths of the four equidistant horizontal line segments, the actual lengths of the four equidistant vertical line segments and the position of each first line segment in the identification code can be determined from the side length of the identification code.
  • For example, if the identification code is a square image such as a two-dimensional code and its side spans 50 pixels, the spacing between adjacent horizontal line segments is 10 pixels and the spacing between adjacent vertical line segments is 10 pixels, which determines the position of each horizontal and each vertical line segment in the identification code.
  • FIG. 6 provides a schematic diagram of the first line segments in the target screen image.
  • In one example, the target screen is an electronic screen and the corresponding target screen image is an electronic screen image that includes one two-dimensional code; the first line segments are then 4 equidistant horizontal line segments and 4 equidistant vertical line segments on the two-dimensional code.
  • In another example, the target screen is an electronic screen and the corresponding target screen image is an electronic screen image that includes two identical two-dimensional codes; the first line segments are then 2 equidistant horizontal line segments and 2 equidistant vertical line segments on each two-dimensional code.
  • the relative position information includes a deflection angle of the target vehicle relative to the target screen
  • the identification code information includes display position information of the identification code on the target screen
  • the step S103 includes:
  • determining the center point of the target screen image according to the display position information;
  • calculating the image length difference between the center point of the target image and the center point of the target screen image;
  • determining the deflection angle according to the image length difference.
  • the relative position information includes the deflection angle of the target vehicle relative to the target screen.
  • According to the image length difference between the center point of the target image and the center point of the target screen image, together with the actual distance from the target vehicle to the target screen, the deflection angle of the target vehicle relative to the target screen is calculated.
  • The image length difference between the two center points can be expressed as a difference in pixel counts, which comprises a horizontal pixel-count difference and a vertical pixel-count difference.
  • Specifically, the horizontal deflection angle of the target vehicle relative to the target screen is calculated from the horizontal pixel-count difference between the center point of the target image and the center point of the target screen image together with the distance from the target vehicle to the target screen; the vertical deflection angle is calculated analogously from the vertical pixel-count difference and that distance.
  • Taking as an example a target screen image that is an electronic screen image including two identification codes, each identification code being a two-dimensional code, FIGS. 7-15 provide schematic diagrams of application scenarios for calculating the relative position information between a target vehicle and a target screen;
  • Figures 7-9 are schematic diagrams of application scenarios for boundary suppression processing on the preprocessed target image.
  • The boundary suppression operation is as follows: the 8 pixels surrounding any pixel in the image are taken as its edge pixels (it should be noted that a pixel on the image boundary has fewer than 8 edge pixels); each pixel is compared with the gray values of its edge pixels, and if any edge pixel has gray value 0, the pixel is considered to be adjacent to the image boundary and its gray value is converted to 0.
  • each QR code has three positioning areas, and each positioning area is composed of a black frame, a white frame, and a square.
  • After boundary suppression, an image as shown in FIG. 7 is obtained; it retains the pixel areas displayed as a nested black frame, white frame and square (as shown in FIG. 8) together with other pixel areas. Converting the gray values of the other pixel areas to 0 yields the image shown in FIG. 9.
  • The pixel areas displayed as a nested black frame, white frame and square in FIG. 7 contain the positioning areas of the two-dimensional codes.
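  • A minimal sketch of this boundary suppression, implemented as a 3×3 erosion of the white regions (NumPy; zero padding at the image border and the single-pass application are assumptions):

```python
import numpy as np

def boundary_suppress(binary: np.ndarray) -> np.ndarray:
    # Set a pixel to 0 whenever any of its 8 edge pixels is 0, i.e. take
    # the minimum over each 3x3 neighbourhood (a binary erosion). Pixels
    # touching the black background are peeled away, leaving the nested
    # frame-and-square areas of FIG. 7-9.
    padded = np.pad(binary, 1, mode="constant", constant_values=0)
    h, w = binary.shape
    shifts = [padded[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)]
    return np.stack(shifts).min(axis=0).astype(binary.dtype)
```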
  • FIGS. 10-12 are schematic diagrams of application scenarios for determining the two-dimensional code positioning areas.
  • Determining the positioning area of the two-dimensional code includes the following steps.
  • The preset marking condition and the preset positioning condition can be set according to the type of identification code; the preset positioning condition is a preset identification condition for judging whether a pixel area in the identification code is a positioning area of the identification code.
  • For a two-dimensional code, the preset marking condition is set as a pixel area in which multiple black borders and a black square are nested. The marked areas that meet the preset marking condition are then filled (as shown in FIG. 10, the pixel gray values of the marked areas are converted to 0) and the centroid position of each marked area is computed by traversal; according to the type of identification code, the corresponding preset positioning condition is determined, the centroid positions are checked, all marked areas whose centroids meet the preset positioning condition are obtained, and the positioning areas of the identification code are determined.
  • FIG. 11 is a schematic diagram of a positioning area of a two-dimensional code.
  • The white color blocks (that is, the parts with pixel value 1) in the two-dimensional code positioning area are taken as wave crests, and the black color blocks (that is, the parts with pixel value 0) are taken as wave troughs.
  • The relative widths of the crests and troughs of each positioning area can be calculated from the numbers of pixels with values 0 and 1 along the vertical line segment passing through the centroid position.
  • The corresponding preset positioning condition can then be set as: the number of crests is 3, the number of troughs is 2, and the ratio of crest width to trough width meets a preset ratio threshold; a pixel area satisfying this condition is a positioning area of the two-dimensional code.
  • the gray value of the pixel area where the number of wave crests and/or the number of wave troughs does not meet the preset number can be converted to zero.
  • the similarity of the ratio of peaks and troughs in the pixel area can be obtained by calculating the Euclidean distance, and the similarity can be used as the preset ratio threshold.
  • Specifically, the preset ratio threshold can be set to 0.8; that is, for a pixel area with 3 crests and 2 troughs, if the crest-to-trough ratio measure of the pixel area is less than 0.8, the pixel area is determined to be a two-dimensional code positioning area, and if it is greater than 0.8, the pixel area is determined not to be a two-dimensional code positioning area.
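  • The following sketch applies the crest/trough check along the vertical scanline through a candidate area's centroid column; the exact ratio statistic compared against 0.8 is not spelled out above, so the mean-width ratio used here is an assumption:

```python
import numpy as np

def run_lengths(line: np.ndarray):
    # Split a binary scanline into consecutive runs; return each run's
    # value (0 or 1) and its width in pixels.
    change = np.flatnonzero(np.diff(line)) + 1
    bounds = np.concatenate(([0], change, [line.size]))
    return line[bounds[:-1]], np.diff(bounds)

def is_positioning_area(region: np.ndarray, ratio_threshold: float = 0.8) -> bool:
    # region: binary (0/1) patch; scan the vertical line through its middle column.
    values, widths = run_lengths(region[:, region.shape[1] // 2])
    # Drop leading/trailing background runs of zeros.
    start = 1 if values[0] == 0 else 0
    stop = len(values) - 1 if values[-1] == 0 else len(values)
    values, widths = values[start:stop], widths[start:stop]
    peaks = widths[values == 1]       # white crests
    troughs = widths[values == 0]     # black troughs
    if len(peaks) != 3 or len(troughs) != 2:
        return False
    # Illustrative ratio test against the preset threshold.
    return troughs.mean() / peaks.mean() < ratio_threshold
```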
  • FIG. 12 shows the pixel areas determined to be two-dimensional code positioning areas.
  • After the positioning areas of each QR code are determined, the arrangement of the two QR codes distinguishes them: the three positioning areas with the smaller abscissas belong to the QR code on the left, and the three positioning areas with the larger abscissas belong to the QR code on the right. All positioning areas of each two-dimensional code are recognized to obtain the identification code information of each two-dimensional code.
  • FIG. 13 is a schematic diagram of an application scenario for calculating the actual distance between the target vehicle and the target screen
  • The unit of length measurement in the target image is the pixel (px). It can therefore be converted into centimeters (cm) through the preset conversion coefficient between pixels and centimeters to obtain the length in centimeters of the identification code's side in the target image; the actual distance between the target vehicle and the target screen is then calculated from the actual side length of the identification code and that length.
  • In FIG. 13, the focal length of the camera is denoted by F, the actual distance between the target vehicle and the target screen is denoted by Y, BC is the actual side length of the two-dimensional code, and DE is the number of pixels spanned by the side of the two-dimensional code in the target screen image.
  • The conversion relationship between the camera's pixel density PPI, a length CM (measured in centimeters) and a number of pixels PX is: PX = PPI × CM / 2.54 (8).
  • Because PPI is a fixed coefficient, it can be determined in advance or read directly from the camera manual. The number of pixels DE of the side of the two-dimensional code in the target screen image can therefore be substituted as PX into formula (8) to convert the measurement unit from pixels to centimeters.
  • △ABC and △ADE form a pair of similar triangles, so that F / DE = Y / BC once DE has been converted to centimeters; from this, the actual distance Y = F × BC / DE between the target vehicle and the target screen is obtained.
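  • A sketch of this similar-triangle calculation (the function names are illustrative; F must be the true shooting focal length, which motivates the alternative below):

```python
def pixels_to_cm(px: float, ppi: float) -> float:
    # Formula (8): a length of px pixels on a sensor of ppi pixels per
    # inch spans px / ppi inches, i.e. 2.54 * px / ppi centimeters.
    return 2.54 * px / ppi

def distance_from_focal_length(bc_cm: float, de_px: float,
                               f_cm: float, ppi: float) -> float:
    # Similar triangles: F / DE = Y / BC, hence Y = F * BC / DE.
    de_cm = pixels_to_cm(de_px, ppi)
    return f_cm * bc_cm / de_cm
```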
  • However, the lens focal length marked on a commonly used camera is not equal to the actual shooting focal length, and after an image is taken the camera may apply some preprocessing to it (for example, denoising), so the acquired focal length value F deviates somewhat from the actual focal length.
  • The embodiment of the present application therefore provides another method for calculating the actual distance between the target vehicle and the target screen, which avoids the loss of positioning accuracy caused by inaccurate camera parameters:
  • The actual side length of the QR code is represented by X. In advance, at a known distance Y2 between a vehicle and the target screen, the number of pixels X2 spanned by the side of the corresponding QR code in the target screen image is recorded, together with the pixel-to-centimeter conversion factor PPI.
  • Since the apparent pixel size of the side is inversely proportional to the shooting distance, Y × DE = Y2 × X2, where DE is the pixel count of the side in the current target image; Y, the linear distance between the target screen and the camera, that is, the actual distance between the target vehicle and the target screen, is therefore Y = Y2 × X2 / DE.
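  • A sketch of this calibration-based variant, which relies only on the proportionality Y × DE = Y2 × X2 and never uses the focal length:

```python
def distance_from_reference(de_px: float, y2_cm: float, x2_px: float) -> float:
    # At the calibration distance Y2 the QR side spans X2 pixels; the
    # apparent pixel size is inversely proportional to distance, so the
    # current pixel count DE gives Y = Y2 * X2 / DE.
    return y2_cm * x2_px / de_px
```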
  • FIG. 14 is a schematic diagram of an application scenario for calculating the deflection angle between the target vehicle and the target screen.
  • In FIG. 14, the horizontal distance between the camera and the center of the target screen is represented by DX, and the vertical distance by DY. Since the two two-dimensional codes are symmetric about the center of the target screen, and the distance between each two-dimensional code and the edge of the electronic screen equals one half of the distance between the two codes, the midpoint between the two two-dimensional codes in the target image can be determined as the center point of the target screen image. The difference in the number of horizontal pixels from the center point of the target image to the center point of the target screen image is represented by C1, and the difference in the number of vertical pixels by C2.
  • On the target image, a single two-dimensional code spans PX pixels in width and PY pixels in height.
  • With the actual side length of the QR code represented by L, the horizontal distance DX and the vertical distance DY can be calculated as: DX = C1 × L / PX, DY = C2 × L / PY.
  • With the actual distance between the target vehicle and the target screen denoted Y, the horizontal deflection angle between the target vehicle and the target screen is arctan(DX / Y), and the vertical deflection angle is arctan(DY / Y).
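  • A sketch of the offset-to-angle step; treating the angles as arctangents of the lateral offsets over the distance Y is the standard geometric reading of the quantities above:

```python
import math

def deflection_angles(c1_px: float, c2_px: float, l_cm: float,
                      px_w: float, py_h: float, y_cm: float):
    # Scale the center-point pixel offsets C1, C2 to centimeters using the
    # QR code's known side length L over its pixel extent, then recover
    # the horizontal and vertical deflection angles in degrees.
    dx = c1_px * l_cm / px_w            # horizontal offset DX in cm
    dy = c2_px * l_cm / py_h            # vertical offset DY in cm
    horizontal = math.degrees(math.atan2(dx, y_cm))
    vertical = math.degrees(math.atan2(dy, y_cm))
    return horizontal, vertical
```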
  • FIG. 15 provides a schematic diagram of another application scenario for calculating the deflection angle between the target vehicle and the target screen.
  • From the first line segments on the QR code, the horizontal distance between each preset horizontal line and the target vehicle, and the vertical distance between each preset vertical line and the target vehicle, are calculated.
  • The steps for calculating the horizontal deflection angle with the MUSIC algorithm are as follows: the distance between two adjacent preset vertical lines is d, and the incident signal (that is, the input data) of the MUSIC algorithm is constructed as a matrix from the distance d, with intermediate variables Z1, Z2, Z3 and Z4, where:
  • Y1 represents the distance between the target vehicle and the target screen estimated from the first vertical line segment (such as the left edge of the left QR code in the target screen image);
  • Y2 represents the distance between the target vehicle and the target screen estimated from the second vertical line segment;
  • Y3 represents the distance between the target vehicle and the target screen estimated from the third vertical line segment;
  • Y4 represents the distance between the target vehicle and the target screen estimated from the fourth vertical line segment (such as the right edge of the right QR code in the target screen image).
  • H represents the conjugate transpose of the matrix
  • A is the direction response vector
  • R is the signal correlation matrix, which is extracted from the input signal S(i);
  • σ² is the noise power, and
  • I is the identity matrix;
  • λ is an eigenvalue obtained by the decomposition, and ν(λ) is the eigenvector corresponding to the eigenvalue λ. The eigenvalues λ are sorted by magnitude; the eigenvector ν(λ) corresponding to the largest eigenvalue is taken as the signal subspace, and the other 3 eigenvalues and their corresponding eigenvectors are taken as the noise subspace, giving the noise matrix En.
  • The horizontal deflection angle is then obtained from the MUSIC spatial spectrum P = 1 / (a^H × En × En^H × a), whose peak over the scanned angles gives the deflection angle;
  • a represents the signal vector (extracted from S(i)).
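  • The sketch below evaluates a textbook MUSIC pseudospectrum P(θ) = 1 / (a^H · En · En^H · a) for a 4-element input; how the snapshots Z1..Z4 are built from the distance estimates Y1..Y4, and the wavelength-like scale in the steering vector, are assumptions rather than the patent's exact construction:

```python
import numpy as np

def music_spectrum(S: np.ndarray, angles_deg: np.ndarray,
                   d: float, wavelength: float) -> np.ndarray:
    # S: complex snapshot matrix of shape (4, num_snapshots) holding the
    # incident signals built from Z1..Z4.
    R = S @ S.conj().T / S.shape[1]          # signal correlation matrix R
    eigvals, eigvecs = np.linalg.eigh(R)     # eigenvalues in ascending order
    En = eigvecs[:, :-1]                     # noise subspace: all but the largest
    n = np.arange(S.shape[0])
    P = np.empty(angles_deg.size)
    for i, theta in enumerate(np.deg2rad(angles_deg)):
        # steering (direction response) vector for a uniform array spaced d
        a = np.exp(-2j * np.pi * d * n * np.sin(theta) / wavelength)
        P[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return P

# usage sketch: the deflection angle is the peak of the spectrum
# angles = np.linspace(-60.0, 60.0, 241)
# deflection = angles[np.argmax(music_spectrum(S, angles, d=1.0, wavelength=4.0))]
```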
  • In this way, the angle of the camera's deflection can be calculated from the degree of deformation in the image.
  • The deformation degrees of multiple line segments on the target image are converted into incident signals and used as the input of the MUSIC algorithm to calculate the deflection angle of the camera relative to the center of the target screen, which serves as the angle between the target vehicle and the target screen.
  • The deviation of the deflection angle calculated by the MUSIC algorithm is smallest when the difference in the deformation degrees of the multiple first line segments on the identification code is largest.
  • the calculation method is as follows:
  • The conversion matrix for the camera's shooting is set as:
  • K = [γ-N, γ1-N, γ2-N, …, γ0, …, γN-2, γN-1, γN] (24);
  • K is the distortion matrix of the camera.
  • the actual position of the object is not the same as the position in the image.
  • the matrix K expresses the conversion relationship between the actual position of the object and the position in the image.
  • the image is a two-dimensional matrix, and correspondingly, K is also a two-dimensional matrix.
  • Each γ in K is a column vector: γ-N represents the leftmost column vector, γ1-N the second column vector from the left, γ2-N the third column vector from the left, and so on.
  • In this way, through the MUSIC calculation based on screen optical communication, the deflection angle of the camera relative to the center of the target screen can be obtained, and from it the angle between the target vehicle and the target screen, which improves the efficiency and accuracy of the calculation.
  • In a possible implementation, after step S103 the method further includes:
  • S104: Obtain second relative position information between another vehicle and the target screen.
  • Specifically, a target image containing the target screen image sent by the other vehicle is obtained and processed through the above steps S101 to S103 to obtain the second relative position information between the other vehicle and the target screen; it can be understood that the second relative position information includes the distance and the deflection angle between the other vehicle and the target screen.
  • S105 Determine third relative position information between the target vehicle and the other vehicle according to the relative position information and the second relative position information.
  • The third relative position information between the target vehicle and the other vehicle includes the distance and the angle between the target vehicle and the other vehicle; it can be calculated from the relative position information between the target vehicle and the target screen and the second relative position information between the other vehicle and the target screen.
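  • A planar sketch of this step: each vehicle is placed in a screen-centred frame from its (actual distance, horizontal deflection angle) pair and the two positions are differenced; ignoring the vertical deflection is a simplifying assumption:

```python
import math

def third_relative_position(dist_a_cm: float, angle_a_deg: float,
                            dist_b_cm: float, angle_b_deg: float):
    # Screen-centred coordinates: x across the screen, y away from it.
    xa = dist_a_cm * math.sin(math.radians(angle_a_deg))
    ya = dist_a_cm * math.cos(math.radians(angle_a_deg))
    xb = dist_b_cm * math.sin(math.radians(angle_b_deg))
    yb = dist_b_cm * math.cos(math.radians(angle_b_deg))
    distance = math.hypot(xb - xa, yb - ya)               # vehicle-to-vehicle distance
    bearing = math.degrees(math.atan2(xb - xa, yb - ya))  # angle between the vehicles
    return distance, bearing
```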
  • the target screen image includes at least two identical identification codes.
  • Based on this, when a target image is shot with a monocular camera, the identification code information of multiple identification codes can be obtained and the information of at least two identification codes can be processed jointly, so that vehicle positioning is less affected by environmental factors.
  • In a possible implementation, the identification code information further includes road traffic information of the area where the target screen is located; after the relative position information between the target vehicle and the target screen is determined according to the identification code information, the method further includes:
  • generating a driving instruction corresponding to the target vehicle according to the relative position information and the road traffic information, the driving instruction including the driving speed and the driving direction;
  • sending the driving instruction to the target vehicle to control the target vehicle to travel according to the driving instruction.
  • Specifically, the road traffic information of the area where the target screen is located is obtained, and the relative position information and the road traffic information are analyzed to determine the road condition of the road where the target vehicle is located; a driving instruction corresponding to the target vehicle is generated and sent to the target vehicle, controlling the target vehicle to drive according to the driving instruction.
  • In summary, the target image containing the target screen image sent by the target vehicle is processed, the identification code in the target screen image is recognized to obtain identification code information, and the relative position information between the target vehicle and the target screen is calculated according to the identification code information.
  • the equipment cost is low, the influence of environmental factors on the ranging accuracy is reduced, and the stability of the vehicle positioning is improved.
  • FIG. 16 shows a schematic flowchart of a vehicle positioning method based on screen light communication provided by the present application. As an example and not a limitation, the method may be applied to a vehicle.
  • S201: Acquire an image.
  • S202: When it is recognized that the image includes a target screen image, determine that the image is the target image.
  • S203: Send the target image to a server, so that the server determines the relative position information between the vehicle and the target screen according to the target image.
  • Specifically, the camera is controlled in real time to capture images, and the images are analyzed and recognized.
  • When a target screen image is recognized in an image, the image is determined to be the target image and sent to the server, so that the server performs image recognition on the identification code in the target image to obtain the identification code information and then determines the relative position information between the vehicle and the target screen according to the identification code information.
  • the image is acquired in real time, and when it is recognized that the image includes the target screen image, the image is sent to the server as the target image, so that the server can determine the relative position information between the vehicle and the target screen based on the target image.
  • the screen light communication between the screen and the vehicle realizes a wide range of high-precision vehicle positioning operations, with low equipment costs, and at the same time improves the range and stability of high-precision positioning of the vehicle.
  • FIG. 17 shows a structural block diagram of the vehicle positioning device 100 based on screen light communication provided by an embodiment of the present application.
  • the positioning device 100 is applied to a server, and for ease of description, only the parts related to the embodiment of the present application are shown.
  • the vehicle positioning device 100 based on screen light communication includes:
  • the receiving module 101 is configured to receive a target image sent by a target vehicle, where the target image includes a target screen image; the target screen image includes at least one identification code;
  • the identification module 102 is configured to perform image recognition on the identification code to obtain identification code information
  • the determining module 103 is configured to determine the relative position information between the target vehicle and the target screen according to the identification code information
  • the device 100 further includes:
  • the obtaining module 104 is configured to obtain second relative position information between other vehicles and the target screen;
  • the second determining module 105 is configured to determine third relative position information between the target vehicle and the other vehicles according to the relative position information and the second relative position information.
  • the relative position information includes the actual distance between the target vehicle and the target screen
  • the identification code information includes the actual side length of the preset side in the identification code
  • the determining module 103 includes:
  • the first determining unit 1031 is configured to determine the image side length of the preset side in the target screen image
  • the first calculation unit 1032 is configured to calculate the actual distance according to the actual side length of the preset side, the image side length of the preset side in the target screen image, and a preset conversion coefficient.
  • the relative position information includes a deflection angle of the target vehicle relative to the target screen
  • the identification code information includes display position information of the identification code on the target screen
  • the determining module 103 includes:
  • the second determining unit 1033 is configured to determine the center point of the target screen image according to the display position information
  • the second calculation unit 1034 is configured to calculate the image length difference between the center point of the target image and the center point of the target screen image
  • the third determining unit 1035 is configured to determine the deflection angle according to the image length difference.
  • the relative position information includes the deflection angle of the target vehicle relative to the target screen;
  • the identification code information includes the actual lengths of a plurality of first line segments preset in the identification code;
  • the determining module 103 includes:
  • the fourth determining unit 1036 is configured to determine the image length of the multiple first line segments in the target screen image
  • the third calculation unit 1037 is configured to calculate the distances between the plurality of first line segments and the target vehicle respectively, according to the image lengths of the plurality of first line segments in the target screen image, the actual lengths of the plurality of first line segments, and a preset conversion coefficient;
  • the fifth determining unit 1038 is configured to determine the deflection angle according to the distance between the plurality of first line segments and the target vehicle.
  • the identification code information further includes road traffic information of the area where the target screen is located, and the device 100 further includes:
  • a generating module configured to generate a driving instruction corresponding to the target vehicle according to the relative position information and the road traffic information, the driving instruction including the driving speed and the driving direction;
  • the sending module is used to send the driving instruction to the target vehicle to control the target vehicle to drive according to the driving instruction.
  • the target image containing the target screen image sent by the target vehicle is processed, the identification code in the target screen image is recognized to obtain identification code information, and the relative position information between the target vehicle and the target screen is calculated according to the identification code information.
  • based on the screen light communication between the target screen and the vehicle, wide-range and high-precision vehicle positioning is realized; the equipment cost is low, the influence of environmental factors on ranging accuracy is reduced, and the stability of vehicle positioning is improved.
  • FIG. 18 shows a structural block diagram of a vehicle positioning device 200 based on screen light communication provided by an embodiment of the present application.
  • the positioning device 200 is applied to a vehicle, and for ease of description, only the parts related to the embodiment of the present application are shown.
  • the vehicle positioning device 200 based on screen light communication includes:
  • the obtaining module 201 is used to obtain an image
  • the determining module 202 is configured to determine that the image is a target image when it is recognized that the image includes a target screen image;
  • the sending module 203 is configured to send the target image to the server, so that the server determines the relative position information between the vehicle and the target screen according to the target image.
  • the image is acquired in real time, and when it is recognized that the image includes the target screen image, the image is sent to the server as the target image, so that the server can determine the relative position information between the vehicle and the target screen based on the target image.
  • the screen light communication between the screen and the vehicle thus realizes wide-range, high-precision vehicle positioning at low equipment cost, while improving the range and stability of high-precision positioning of the vehicle.
  • FIG. 19 is a schematic structural diagram of a server provided by an embodiment of this application.
  • the server 19 of this embodiment includes: at least one processor 190 (only one is shown in FIG. 19), a memory 191, and a computer program 192 stored in the memory 191 and runnable on the at least one processor 190; the processor 190, when executing the computer program 192, implements the steps in any of the foregoing method embodiments.
  • the server 19 may be a computing device such as a cloud server.
  • the server may include, but is not limited to, a processor 190 and a memory 191.
  • FIG. 19 is only an example of the server 19 and does not constitute a limitation on the server 19; it may include more or fewer components than shown, combine certain components, or have different components; for example, it may also include input/output devices, network access devices, and so on.
  • the so-called processor 190 may be a central processing unit (Central Processing Unit, CPU), and the processor 190 may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the storage 191 may be an internal storage unit of the server 19 in some embodiments, such as a hard disk or a memory of the server 19.
  • the memory 191 may also be an external storage device of the server 19, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital card (Secure Digital, SD), or a flash card (Flash Card) equipped on the server 19.
  • the storage 191 may also include both an internal storage unit of the server 19 and an external storage device.
  • the memory 191 is used to store an operating system, an application program, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program.
  • the memory 191 can also be used to temporarily store data that has been output or will be output.
  • An embodiment of the present application also provides a server, which includes: at least one processor, a memory, and a computer program stored in the memory and running on the at least one processor, and the processor executes the The computer program implements the steps in any of the foregoing method embodiments.
  • the embodiments of the present application also provide a computer-readable storage medium, the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps in each of the foregoing method embodiments can be realized.
  • the embodiments of the present application provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to implement the steps in the foregoing method embodiments when executed.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

A vehicle positioning method, apparatus and server based on screen light communication. The method includes: receiving a target image sent by a target vehicle, the target image containing a target screen image (S101); recognizing an identification code in the target screen image to obtain identification code information (S102); and calculating relative position information between the target vehicle and the target screen according to the identification code information (S103). Screen light communication between the target screen and the vehicle realizes wide-range, high-precision vehicle positioning at low equipment cost, reduces the influence of environmental factors on ranging accuracy, and improves the stability of vehicle positioning.

Description

Vehicle positioning method, apparatus and server based on screen light communication
Technical Field
This application relates to the field of positioning technology, and in particular to a vehicle positioning method, apparatus and server based on screen light communication.
Background
In recent years, autonomous driving technology has developed rapidly; in the process of realizing autonomous driving, how to position the vehicle is one of the important research topics.
Vehicle positioning technologies usually include the following: positioning methods based on high-precision radar, positioning methods based on lidar, and positioning methods based on cameras.
Among them, the positioning method based on high-precision radar means that one or more high-precision radars are installed on the vehicle; when a radar emits ultrasonic pulses, the reflection characteristics of ultrasonic waves can be used to receive the waves reflected by surrounding objects, the surrounding objects are identified by detecting the ultrasonic waveform, and the relative position between the vehicle and the objects is then determined.
The positioning method based on lidar means that a lidar is installed on the vehicle; the lidar emits laser beams while receiving the beams reflected by surrounding objects, and the reflected beams are compared with the emitted beams to detect characteristic quantities such as the position and speed of the surrounding objects, thereby determining the relative position between the vehicle and the objects.
The camera-based positioning method refers to technical solutions that use cameras for positioning, and it can be divided into monocular camera positioning and multi-camera positioning.
The principle of monocular camera positioning is mainly that objects photographed by a monocular camera appear larger when near and smaller when far. With the vehicle speed and the camera focal length known, multiple images of the same object are taken at fixed time intervals, and the actual distance between the object and the camera is calculated from the size change of the object across the images, thereby determining the relative position between the vehicle and the object.
Binocular or multi-camera positioning uses the principle of parallax. With the spacing between multiple cameras known, the same object is photographed by the multiple cameras, the offset of the object across the images is calculated, and the actual distance between the object and the cameras is calculated from the offset and the camera spacing, thereby determining the relative position between the vehicle and the object.
However, high-precision radars currently on the market are expensive, lidar positioning is easily affected by weather and environment, and monocular or multi-camera systems can achieve high-precision positioning only within a small range.
Therefore, the related vehicle positioning methods respectively suffer from high cost, low accuracy, small range or low stability, and thus cannot be widely popularized.
Summary
The purpose of the embodiments of this application is to provide a vehicle positioning method, apparatus and server based on screen light communication, including but not limited to solving the problems of high cost, low accuracy, small range or low stability of the related vehicle positioning methods.
The technical solutions adopted in the embodiments of this application are as follows:
In a first aspect, a vehicle positioning method based on screen light communication is provided, applied to a server, including:
receiving a target image sent by a target vehicle, the target image including a target screen image, and the target screen image including at least one identification code;
performing image recognition on the identification code to obtain identification code information;
determining relative position information between the target vehicle and a target screen according to the identification code information.
In a second aspect, a vehicle positioning method based on screen light communication is provided, applied to a vehicle, including:
acquiring an image;
when it is recognized that the image includes a target screen image, determining that the image is a target image;
sending the target image to a server, so that the server determines relative position information between the vehicle and a target screen according to the target image.
In a third aspect, a vehicle positioning apparatus based on screen light communication is provided, applied to a server, including:
a receiving module, configured to receive a target image sent by a target vehicle, the target image including a target screen image, and the target screen image including at least one identification code;
a recognition module, configured to perform image recognition on the identification code to obtain identification code information;
a determining module, configured to determine relative position information between the target vehicle and a target screen according to the identification code information.
In a fourth aspect, a vehicle positioning apparatus based on screen light communication is provided, applied to a vehicle, including:
an acquiring module, configured to acquire an image;
a judging module, configured to determine, when it is recognized that the image includes a target screen image, that the image is a target image;
a sending module, configured to send the target image to a server, so that the server determines relative position information between the vehicle and a target screen according to the target image.
In a fifth aspect, an embodiment of this application provides a server, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor, when executing the computer program, implements the vehicle positioning method based on screen light communication according to any one of the above first aspect.
In a sixth aspect, an embodiment of this application provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the vehicle positioning method based on screen light communication according to any one of the above first aspect.
In a seventh aspect, an embodiment of this application provides a computer program product which, when run on a terminal device, causes the terminal device to execute the vehicle positioning method based on screen light communication according to any one of the above first aspect.
The beneficial effects of the vehicle positioning method based on screen light communication provided by the embodiments of this application are as follows: the target image containing the target screen image sent by the target vehicle is processed, the identification code in the target screen image is recognized to obtain identification code information, and the relative position information between the target vehicle and the target screen is calculated according to the identification code information; based on the screen light communication between the target screen and the vehicle, wide-range and high-precision vehicle positioning is realized at low equipment cost, the influence of environmental factors on ranging accuracy is reduced, and the stability of vehicle positioning is improved.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of this application more clearly, the drawings needed in the description of the embodiments or the exemplary technologies are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative work.
FIG. 1 is an architecture diagram of a vehicle positioning system based on screen light communication provided by an embodiment of this application;
FIG. 2 is a schematic flowchart of a vehicle positioning method based on screen light communication provided by an embodiment of this application;
FIG. 3 is a schematic diagram of an application scenario of binarizing a target image provided by an embodiment of this application;
FIG. 4 is a schematic diagram of a target screen of a vehicle positioning method based on screen light communication provided by another embodiment of this application;
FIG. 5 is a schematic diagram of a target screen of a vehicle positioning method based on screen light communication provided by another embodiment of this application;
FIG. 6 is a schematic diagram of first line segments included in a target screen image of a vehicle positioning method based on screen light communication provided by an embodiment of this application;
FIG. 7 is a schematic diagram of an application scenario of performing boundary suppression on a preprocessed target image in a vehicle positioning method based on screen light communication provided by an embodiment of this application;
FIG. 8 is a schematic diagram of an application scenario of performing boundary suppression on a preprocessed target image in a vehicle positioning method based on screen light communication provided by an embodiment of this application;
FIG. 9 is a schematic diagram of an application scenario of performing boundary suppression on a preprocessed target image in a vehicle positioning method based on screen light communication provided by an embodiment of this application;
FIG. 10 is a schematic diagram of an application scenario of determining two-dimensional code locator regions in a vehicle positioning method based on screen light communication provided by an embodiment of this application;
FIG. 11 is a schematic diagram of the locator regions of a two-dimensional code in a vehicle positioning method based on screen light communication provided by an embodiment of this application;
FIG. 12 is a schematic diagram of an application scenario of determining two-dimensional code locator regions in a vehicle positioning method based on screen light communication provided by another embodiment of this application;
FIG. 13 is a schematic diagram of an application scenario of calculating the actual distance between a target vehicle and a target screen in a vehicle positioning method based on screen light communication provided by an embodiment of this application;
FIG. 14 is a schematic diagram of an application scenario of calculating the deflection angle between a target vehicle and a target screen in a vehicle positioning method based on screen light communication provided by an embodiment of this application;
FIG. 15 is a schematic diagram of an application scenario of detecting two-dimensional code locator regions in a vehicle positioning method based on screen light communication provided by another embodiment of this application;
FIG. 16 is a schematic flowchart of a vehicle positioning method based on screen light communication provided by another embodiment of this application;
FIG. 17 is a schematic structural diagram of a vehicle positioning apparatus based on screen light communication provided by an embodiment of this application;
FIG. 18 is a schematic structural diagram of a vehicle positioning apparatus based on screen light communication provided by another embodiment of this application;
FIG. 19 is a schematic structural diagram of a server provided by an embodiment of this application.
Detailed Description
In order to make the purpose, technical solutions and advantages of this application clearer, this application is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain this application and are not intended to limit it.
It should be noted that the terms "first" and "second" are used only for ease of description and cannot be understood as indicating or implying relative importance or implicitly indicating the number of technical features. "A plurality of" means two or more, unless otherwise specifically defined.
The vehicle positioning method based on screen light communication provided by the embodiments of this application can be applied to terminal devices such as servers or vehicles; the embodiments of this application place no restriction on the specific type of the terminal device.
In recent years, although autonomous driving technology has achieved a certain degree of development and has been widely promoted, the positioning equipment of existing vehicle positioning technologies still does not cover every area, which to a certain extent leads to low vehicle positioning accuracy. To solve this problem, this application proposes a vehicle positioning method based on screen light communication, a vehicle positioning apparatus based on screen light communication, a server and a computer-readable storage medium, which can realize high-precision vehicle positioning through screen light communication between the vehicle and a screen while the vehicle is driving autonomously.
To implement the technical solution proposed in this application, a vehicle positioning system based on screen light communication can first be constructed. Referring to FIG. 1, the system consists of one or more screens (only one is shown in FIG. 1), one or more autonomous vehicles (only three are shown in FIG. 1, namely vehicle a, vehicle b and vehicle c) and a server; a screen and an autonomous vehicle can carry out screen light communication, and the autonomous vehicles are communicatively connected with the server.
Here, an autonomous vehicle is a vehicle that may require vehicle positioning services in order to realize autonomous driving, and a screen is a positioning device capable of providing positioning services. While an autonomous vehicle is driving autonomously, it can send a target image including a target screen image to the server of the vehicle positioning system based on screen light communication; after receiving the target image containing the target screen image sent by an autonomous vehicle, the server can recognize the target screen image to obtain identification code information, and determine the relative position information between that autonomous vehicle and the target screen according to the identification code information.
To illustrate the technical solution proposed in this application, specific embodiments are described below.
FIG. 2 shows a schematic flowchart of the vehicle positioning method based on screen light communication provided by this application; as an example rather than a limitation, the method can be applied to the above server.
S101: Receive a target image sent by a target vehicle, the target image including a target screen image, and the target screen image including at least one identification code.
In specific applications, the target image including the target screen image is captured by a camera on the target vehicle and sent by the vehicle; the camera may be a monocular camera.
In specific applications, multiple screens can be set up in advance in each region (or city), and the number of screens can be set according to the actual situation; for example, 10,000 screens are set up in city A.
Each screen is used to display at least one identification code to provide identification code information. Screen types include, but are not limited to, electronic screens, road signs, or printed matter. The identification code may be a two-dimensional code or another image that can be used for positioning while displaying identification code information. The target screen refers to the screen corresponding to the target image.
It should be noted that, since the target image captured by the target vehicle may include images of objects other than the target screen image, the target image needs to be preprocessed for image noise reduction to obtain a preprocessed target image, so as to reduce the influence of environmental noise on the accuracy of vehicle positioning and thereby improve positioning accuracy. The preprocessing includes, but is not limited to, at least one of denoising and binarization.
In specific applications, when binarizing the target image, the conversion threshold T of the binarization can be calculated by the maximum between-class variance (OTSU) algorithm; the values of pixels with gray level greater than T are converted to 255 and those with gray level less than T to 0, or alternatively, the values of pixels with gray level greater than T are converted to 0 and those with gray level less than T to 255, completing the binarization of the image. It should be noted that the value of T ranges from 0 to 255.
In the embodiment of this application, it is set that the gray values of pixels in the target image greater than T are converted to 0, and the gray values of pixels less than T are converted to 255.
The steps of calculating the conversion threshold T according to the maximum between-class variance (OTSU) algorithm are as follows:
The percentage of the identification-code pixels in the total number of pixels of the target image is denoted by ω0, and the average gray level of the identification-code pixels by μ0; the percentage of the other pixels is denoted by ω1, and their average gray level by μ1. The total average gray level of the target image is denoted by μ, the between-class variance by g, and the target image by O(x, y), where (x, y) are the coordinates of a pixel in the target image. The size of the target image O(x, y) is M pixels × N pixels; the number of pixels in the target image whose gray value is less than the conversion threshold T is N0, and the number of pixels whose gray value is greater than T is N1.
Correspondingly, the conversion relations among the size M × N of the target image O(x, y), the conversion threshold T, ω0, μ0, ω1, μ1, the total average gray level μ, the between-class variance g, N0 and N1 are as follows:
ω0 = N0/(M×N)  (1);
ω1 = N1/(M×N)  (2);
N0 + N1 = M×N  (3);
ω0 + ω1 = 1  (4);
μ = ω0×μ0 + ω1×μ1  (5);
g = ω0(μ0−μ)² + ω1(μ1−μ)²  (6);
By transforming the above formulas, the equivalent formula can be obtained:
g = ω0ω1(μ0−μ1)²  (7);
By traversing the values of T, obtaining the N0 and N1 corresponding to each value of T, and substituting them into formulas (1)–(7), the value of T that maximizes the between-class variance g is obtained and used as the conversion threshold T for binarizing the target image.
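As an illustration of the threshold search described above, the following is a minimal Python sketch of the OTSU binarization of formulas (1)–(7); the array and function names are illustrative, and the image is assumed to be an 8-bit grayscale numpy array.

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Traverse T = 0..255 and return the T that maximizes the
    between-class variance g = w0*w1*(mu0-mu1)**2 (formula (7))."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = gray.size
    best_t, best_g = 0, -1.0
    for t in range(256):
        n0 = hist[:t].sum()          # pixels with gray value < T
        n1 = total - n0              # remaining pixels
        if n0 == 0 or n1 == 0:
            continue
        w0, w1 = n0 / total, n1 / total                      # formulas (1), (2)
        mu0 = (hist[:t] * np.arange(t)).sum() / n0
        mu1 = (hist[t:] * np.arange(t, 256)).sum() / n1
        g = w0 * w1 * (mu0 - mu1) ** 2                       # formula (7)
        if g > best_g:
            best_g, best_t = g, t
    return best_t

def binarize(gray: np.ndarray) -> np.ndarray:
    """Per this embodiment: gray values above T map to 0, below T to 255."""
    t = otsu_threshold(gray)
    return np.where(gray > t, 0, 255).astype(np.uint8)
```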
FIG. 3 exemplarily shows a schematic diagram of an application scenario after binarizing a target image.
In FIG. 3, the target screen is an electronic screen that includes two identification codes, and the identification codes are two-dimensional codes; that is, the target image includes one electronic screen image, and the electronic screen image includes two two-dimensional codes.
S102: Perform image recognition on the identification code to obtain identification code information.
In specific applications, the identification code information is the information displayed on the identification code that is used to calculate the relative position between the target vehicle and the target screen. Depending on the identification codes that are set, the recognized identification code information can also differ. The identification code information in the target screen image provided by this application is exemplarily described below with reference to FIGS. 4–5.
In specific applications, the preprocessed target image is located to determine the position of the identification code in the target image, and image recognition is performed on the identification code according to its position in the target image to obtain the identification code information. The identification code information includes, but is not limited to, the actual size (or actual side length) of the identification code, the display position information of the identification code on the target screen, the identifier of the target screen, and the road condition information of the location of the target screen at the current moment.
In specific applications, each screen is given a different identifier; therefore, the identifier of the target screen included in the identification code information can be recognized, the screen corresponding to that identifier can be determined as the target screen, and the position information of the target screen can be determined according to its identifier.
For example, if the identifier of the target screen included in the identification code information is ID008, it can be determined that the screen with identifier ID008 is the target screen, and the position information of the target screen ID008 can be obtained at the same time.
It should be noted that the identification code can be updated at preset time intervals to update the identification code information it carries, thereby updating the road condition information of the location of the target screen in real time. The preset time interval is set according to the actual situation; for example, if the preset time interval is set to 30 s, the identification code can be updated every 30 s.
As shown in FIG. 4, a schematic diagram of a target screen is exemplarily provided.
In FIG. 4, the target screen is an electronic screen including one two-dimensional code; the two-dimensional code is symmetric about the center of the electronic screen, and the distances between the four sides of the two-dimensional code and the borders of the electronic screen are the same.
In specific applications, when the identification code in the target screen image is a single two-dimensional code, the position of its locator regions in the target screen image can be determined by performing image preprocessing and boundary suppression on the two-dimensional code, so as to determine the specific position of the two-dimensional code in the target image; the two-dimensional code image is then cropped according to that position and sent to a two-dimensional code parser, which parses and recognizes the two-dimensional code to obtain the identification code information.
In one embodiment, the target screen may include two or more identification codes.
In specific applications, when the target screen includes two or more identification codes, the display position information may also include the relative position information among the multiple identification codes.
As shown in FIG. 5, a schematic diagram of another target screen is exemplarily provided.
In FIG. 5, the target screen is an electronic screen in which two two-dimensional codes are arranged left and right. The two codes have the same content and are symmetric about the center of the electronic screen; the right code is displayed as the left code rotated 90° to the right. In FIG. 5, the distance between each two-dimensional code and the edge of the electronic screen is a, and the spacing between the two codes is 2a; that is, the distance between each code and the screen edge equals one half of the spacing between the two codes.
Correspondingly, the identification code information of each two-dimensional code in FIG. 5 should include: the size (or side length) of the code; the display position information of this code and the other code on the electronic screen, namely that they are left-right symmetric about the center of the screen, that the distance between this code and the screen border equals the distance between the other code and the screen border, and that the distance between the two codes is twice the distance between each code and the screen edge; and the road condition information of the location of the target screen at the current moment.
S103: Determine the relative position information between the target vehicle and the target screen according to the identification code information.
In specific applications, the relative position information between the target vehicle and the target screen includes the actual distance and the deflection angle between them. The corresponding target screen is determined according to the identifier of the target screen, and the actual distance and angle between the target vehicle and the target screen are calculated from the actual size (side length) of the identification code, the display position information of the identification code in the target screen image, and so on.
In specific applications, the image side length of a preset side in the target screen image is obtained; it can be expressed by a number of pixels. The image side length of the preset side (in pixels, px) is converted into its length in the target screen image (in centimeters, cm), and the actual distance between the target vehicle and the electronic screen is calculated from that length and the actual side length of the preset side.
In specific applications, the center point of the target screen image is determined from the size (side length) information of the identification code and its display position on the target screen, and the deflection angle between the target vehicle and the target screen is calculated from the image length difference between the center pixel of the target image and the center pixel of the target screen image. The deflection angle between the target vehicle and the target screen includes a horizontal deflection angle and a vertical deflection angle.
Alternatively, the positions of multiple first line segments in the target screen image can be determined from the display position information of the identification code on the target screen; the image lengths of the multiple first line segments in the target screen image are calculated, the distances between the multiple first line segments and the target vehicle are calculated respectively from the actual lengths of the first line segments, their image lengths, and a preset conversion coefficient, and the deflection angle between the target vehicle and the target screen is calculated from those distances.
In the embodiments of this application, lengths in an image can be measured in pixels (px). Therefore, the image side length of the preset side in the target screen image can be expressed by its number of pixels; the image length difference between the center pixel of the target image and the center pixel of the target screen image can be expressed by the difference in pixel counts; and the image lengths of the multiple first line segments in the target screen image can be expressed by their pixel counts. The actual side length of the preset side and the actual lengths of the first line segments are measured in centimeters (cm). Therefore, when calculating the relative position information between the target vehicle and the target screen, the units must be converted from pixels (px) to centimeters (cm) through a preset pixel-to-centimeter conversion coefficient, yielding the side length of the preset side in the target screen image (cm), the length difference between the center pixels of the target image and the target screen image (cm), and the lengths of the first line segments in the target screen image (cm).
In one embodiment, the relative position information includes the actual distance between the target vehicle and the target screen, and the identification code information includes the actual side length of a preset side of the identification code;
Step S103 includes:
S1031: Determine the image side length of the preset side in the target screen image;
S1032: Calculate the actual distance according to the actual side length of the preset side, the image side length of the preset side in the target screen image, and a preset conversion coefficient.
In specific applications, the relative position information includes the actual distance between the target vehicle and the target screen. The image side length of the preset side in the target screen image can be expressed by the number of pixels of the preset side of the identification code in the target screen image; the unit is converted through the preset pixel-to-centimeter conversion coefficient to obtain the length of the preset side in the target screen image (cm), and the actual distance between the target vehicle and the target screen is calculated from that length and the actual side length of the preset side.
The preset side can be set according to the actual situation; for example, when the identification code is rectangular, the preset side is set to the height of the identification code, and correspondingly, the actual height of the identification code included in the identification code information is the actual side length of the preset side.
For example, taking the identification code being a two-dimensional code, which in practice is generally square, any side of the two-dimensional code can be set as the preset side; correspondingly, the actual side length of the two-dimensional code included in the identification code information is the actual side length of the preset side. The number of pixels of any side of the two-dimensional code in the target screen image is obtained, and the actual distance between the target vehicle and the target screen can then be calculated from the actual side length of the two-dimensional code and that pixel count.
In one embodiment, the relative position information includes the deflection angle of the target vehicle relative to the target screen, and the identification code information includes the actual lengths of multiple first line segments preset in the identification code;
Step S103 includes:
determining the image lengths of the multiple first line segments in the target screen image;
calculating the distances between the multiple first line segments and the target vehicle respectively, according to the image lengths of the multiple first line segments in the target screen image, their actual lengths, and a preset conversion coefficient;
determining the deflection angle according to the distances between the multiple first line segments and the target vehicle.
In specific applications, the relative position information includes the deflection angle of the target vehicle relative to the target screen, which includes a horizontal deflection angle and a vertical deflection angle. The identification code information includes the actual lengths of multiple first line segments preset in the identification code; a first line segment is a line segment in the identification code used for measuring the deflection angle between the target vehicle and the target screen. The position of a first line segment in the identification code can be set according to the actual situation, and its actual length changes with its position in the identification code.
In specific applications, the actual length of each first line segment is determined from its position in the identification code, and the image length of each first line segment in the target screen image is calculated. The image length is usually measured in pixels (px); according to the preset pixel-to-centimeter conversion coefficient, it is converted into a length (cm), yielding the length of each first line segment in the target screen image (cm), and the distance between each first line segment and the target vehicle is calculated from that length and the segment's actual length.
Specifically, the multiple first line segments should include multiple horizontal segments and multiple vertical segments. Correspondingly, the distances between the first line segments in the target screen image and the target vehicle include horizontal distances and vertical distances, which are used respectively to calculate the vertical deflection angle and the horizontal deflection angle of the target vehicle relative to the target screen.
In specific applications, all horizontal distances can be processed by a preset algorithm to obtain the vertical deflection angle between the target vehicle and the target screen, and all vertical distances can be processed by the preset algorithm to obtain the horizontal deflection angle. The preset algorithm includes, but is not limited to, the MUSIC (Multiple Signal Classification) algorithm.
Since line segments in the target image are deformed, the positions of the multiple first line segments preset in the identification code are determined, and the degree of deformation of the segments is thereby determined. The deflection angle between the target vehicle and the target screen can be calculated from the degrees of deformation of the segments, so that a monocular camera can simulate a multi-camera positioning method for vehicle positioning, reducing the precision error of the deflection angle without relying on image matching algorithms over multiple images; it is also less affected by environmental factors and can realize vehicle positioning in complex situations.
For example, if eight first line segments are set, namely four horizontal segments and four vertical segments, with equal spacing between every two horizontal segments and equal spacing between every two vertical segments, then the actual lengths of the four equidistant horizontal segments, the actual lengths of the four equidistant vertical segments, and the position information of each first line segment can be determined from the image length of the identification code's side.
For example, if the identification code is a square image such as a two-dimensional code and its side length is 50 pixels, the spacing between adjacent horizontal segments is correspondingly 10 pixels and the spacing between adjacent vertical segments is 10 pixels, from which the position of each horizontal segment and each vertical segment in the identification code can be determined.
As shown in FIG. 6, a schematic diagram of first line segments in a target screen image is provided.
In FIG. 6a, the target screen is an electronic screen and the corresponding target screen image is an electronic screen image including one two-dimensional code; the first line segments are four equidistant horizontal segments and four equidistant vertical segments on the two-dimensional code.
In FIG. 6b, the target screen is an electronic screen and the corresponding target screen image is an electronic screen image including two identical two-dimensional codes; the first line segments are two equidistant horizontal segments and two equidistant vertical segments on each two-dimensional code.
In one embodiment, the relative position information includes the deflection angle of the target vehicle relative to the target screen, and the identification code information includes the display position information of the identification code on the target screen;
Step S103 includes:
determining the center point of the target screen image according to the display position information;
calculating the image length difference between the center point of the target image and the center point of the target screen image;
determining the deflection angle according to the image length difference.
In specific applications, the relative position information includes the deflection angle of the target vehicle relative to the target screen. The center point of the target image is determined, and the center point of the target screen image can be determined from the display position information of the identification code in the target image and the size (side length) of the identification code; the image length difference between the two center points is then calculated. Since image lengths are measured in pixels (px), the unit is converted into a length (cm) through the preset pixel-to-centimeter conversion coefficient to obtain the length difference between the center points of the target image and the target screen image (cm), and the deflection angle of the target vehicle relative to the target screen is calculated from that length difference and the actual distance between the target vehicle and the target screen.
In specific applications, the image length difference between the center points of the target image and the target screen image can be expressed by a pixel-count difference, which includes a horizontal pixel-count difference and a vertical pixel-count difference. Correspondingly, the horizontal deflection angle of the target vehicle relative to the target screen can be calculated from the horizontal pixel-count difference and the actual distance between the target vehicle and the target screen, and the vertical deflection angle from the vertical pixel-count difference and the actual distance.
Taking as an example the target screen being an electronic screen, the target screen image being an electronic screen image including two identification codes, and the identification codes being two-dimensional codes, FIGS. 7 to 15 provide schematic diagrams of application scenarios of calculating the relative position information between the target vehicle and the target screen.
FIGS. 7–9 are schematic diagrams of application scenarios of performing boundary suppression on the preprocessed target image.
In specific applications, the boundary suppression operation includes: taking the 8 pixels around any pixel in the image as its edge pixels (it should be noted that a center pixel at the image border has fewer than 8 edge pixels), and comparing the gray value of each pixel with those of its edge pixels; if the gray values of all edge pixels of a pixel are 0, that pixel is considered a pixel adjacent to the image boundary, and its gray value is converted to 0.
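The following is a minimal sketch of this boundary suppression operation, assuming a binarized image in which foreground pixels are 255 and background pixels are 0 (the array convention and names are assumptions introduced here):

```python
import numpy as np

def suppress_boundaries(img: np.ndarray) -> np.ndarray:
    """Zero out every pixel whose existing 8-neighborhood is entirely 0,
    as described for the boundary-suppression step (a sketch; the original
    operates on the binarized target image)."""
    h, w = img.shape
    out = img.copy()
    for y in range(h):
        for x in range(w):
            if img[y, x] == 0:
                continue
            y0, y1 = max(0, y - 1), min(h, y + 2)
            x0, x1 = max(0, x - 1), min(w, x + 2)
            neigh = img[y0:y1, x0:x1]
            # the neighborhood sum equals the center alone -> all neighbors are 0
            if neigh.sum() == img[y, x]:
                out[y, x] = 0
    return out
```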
Under normal circumstances, each two-dimensional code has three locator regions, each consisting of a black frame, a white frame and a square. After boundary suppression is performed on the two-dimensional code, an image as shown in FIG. 7 can be obtained, in which many pixel regions displayed as a nested black frame, white frame and square remain (as shown in FIG. 8), as well as other pixel regions; the gray values of the other pixel regions are converted to 0, obtaining the image shown in FIG. 9. The pixel regions in FIG. 7 displayed as a nested black frame, white frame and square contain the locator regions of the two-dimensional code.
FIGS. 10–12 are schematic diagrams of application scenarios of determining the locator regions of a two-dimensional code.
In specific applications, when the identification code is a two-dimensional code, determining its locator regions includes:
marking all regions on the identification code that satisfy a preset marking condition;
traversing all marked regions and calculating the centroid position of each marked region;
detecting the centroid positions, obtaining all marked regions whose centroids satisfy a preset locating condition, and determining the locator regions of the identification code;
recognizing the locator regions of the identification code to obtain the identification code information.
In specific applications, the preset marking condition and the preset locating condition can be set according to the type of the identification code. The preset locating condition is a preset recognition condition for judging whether a pixel region in the identification code is a locator region of the identification code.
When the identification code is a two-dimensional code, the preset marking condition is set to pixel regions displayed as multiple black frames nested with a black square. The marked regions satisfying the preset marking condition are then filled (as shown in FIG. 10, the pixel gray values of the marked regions are converted to 0), the centroid position of each marked region is calculated by traversal, the corresponding preset locating condition is determined according to the type of the identification code, the centroid positions are detected, all marked regions whose centroids satisfy the preset locating condition are obtained, and the locator regions of the identification code are determined.
FIG. 11 is a schematic diagram of the locator regions of a two-dimensional code.
In FIG. 11, the white blocks in a locator region of the two-dimensional code (i.e., the parts with pixel value 1) are taken as peaks, and the black blocks (i.e., the parts with pixel value 0) as valleys. A vertical line segment centered at the centroid of the filled region and parallel to the edge of the two-dimensional code image is determined in advance. Correspondingly, the relative widths of the peaks and valleys of each two-dimensional code can be calculated from the numbers of pixels with values 0 and 1 that the vertical segment passes as it crosses the centroid in the vertical direction. Therefore, the corresponding preset locating condition can be set as: a pixel region with 3 peaks and 2 valleys whose peak and valley width ratio satisfies a preset ratio threshold is a locator region of the two-dimensional code.
It can be understood that the gray values of pixel regions whose number of peaks and/or valleys does not satisfy the preset counts can be converted to 0.
Specifically, the similarity of the peak/valley ratio within a pixel region can be measured and calculated by the Euclidean distance, and the similarity is compared against the preset ratio threshold. The specific algorithm is as follows:
Let the ratio between the peaks and valleys be e1:e2:e3:e4:e5; the formula for calculating the similarity XSD of the peak/valley ratio by the Euclidean distance is:
XSD = √[(e1−ê1)² + (e2−ê2)² + (e3−ê3)² + (e4−ê4)² + (e5−ê5)²],
where ê1:ê2:ê3:ê4:ê5 denotes the reference peak/valley ratio of the locator pattern.
Experimental simulation shows that when the value of XSD is less than 0.8, the accuracy of detecting two-dimensional code locator regions is high.
Therefore, the preset ratio threshold can be set to 0.8; that is, for a pixel region with 3 peaks and 2 valleys, if the peak/valley ratio of the region gives a value less than 0.8, the region is judged to be a two-dimensional code locator region; if it gives a value greater than 0.8, the region is judged not to be a locator region.
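A sketch of this locator test under stated assumptions: the pixel column along the vertical line through the centroid is assumed to be cropped to the candidate region so that exactly five alternating runs remain, the reference ratio is assumed to be the conventional 1:1:3:1:1 finder-pattern ratio, and the run widths are normalized by the smallest run before taking the Euclidean distance; the embodiment itself only fixes the counts (3 peaks, 2 valleys) and the 0.8 threshold.

```python
import numpy as np

RATIO_THRESHOLD = 0.8  # the experimentally chosen threshold from the embodiment

def xsd(ratios: np.ndarray, reference: np.ndarray) -> float:
    """Euclidean distance between a measured peak/valley ratio vector
    e1:e2:e3:e4:e5 and a reference ratio vector."""
    return float(np.linalg.norm(ratios - reference))

def is_locator_region(column: np.ndarray,
                      reference=(1.0, 1.0, 3.0, 1.0, 1.0)) -> bool:
    """column: 0/1 pixel values along the vertical line through the
    candidate region's centroid, cropped so that exactly five alternating
    runs remain. Requires 3 peaks (runs of value 1) and 2 valleys (runs of
    value 0) whose width ratio is close to `reference`."""
    change = np.flatnonzero(np.diff(column)) + 1
    bounds = np.concatenate(([0], change, [len(column)]))
    values = column[bounds[:-1]]          # value of each run
    widths = np.diff(bounds).astype(float)
    if (values == 1).sum() != 3 or (values == 0).sum() != 2:
        return False
    ratios = widths / widths.min()        # express run widths as a ratio
    return xsd(ratios, np.asarray(reference)) < RATIO_THRESHOLD
```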
FIG. 12 includes one pixel region judged to be a two-dimensional code locator region.
In specific applications, after the locator regions of each two-dimensional code are determined, it can be known from the arrangement of the two codes that, in FIG. 12, the three locator regions with smaller abscissas are those of the left two-dimensional code and the three with larger abscissas are those of the right two-dimensional code; all locator regions of each code are recognized to obtain the identification code information of each code.
FIG. 13 is a schematic diagram of an application scenario of calculating the actual distance between the target vehicle and the target screen.
As known from step S103, lengths in the target image are measured in pixels (px); therefore, the unit can be converted into centimeters (cm) through the preset pixel-to-centimeter conversion coefficient to obtain the length of the identification code's side in the target image (cm), and the actual distance between the target vehicle and the target screen is then calculated from the actual side length of the identification code and that length.
In FIG. 13, the focal length of the camera is denoted by F, the actual distance between the target vehicle and the target screen by Y, the actual side length of the two-dimensional code by BC, and the number of pixels of the two-dimensional code in the target screen image by DE.
Therefore, the shooting pixel density of the camera can be obtained in advance; the conversion relation among the camera's pixel density PPI, a length CM (measured in cm) and a pixel count PX is as follows:
CM = 2.54 × PX/PPI  (8);
Since PPI is a fixed coefficient that can be measured in advance or read directly from the camera's manual, the pixel count DE of the two-dimensional code's side in the target screen image can be substituted into formula (8) as PX, converting the unit of DE from pixels to centimeters;
As can be seen from FIG. 13, ΔABC and ΔADE form a pair of similar triangles, where Y is the height of ΔABC with BC as the base; it can thus be understood that Y = AC.
Correspondingly, the actual distance Y between the target vehicle and the target screen and the focal length F of the camera satisfy the following proportional relation:
F/Y = DE/BC  (9);
That is:
Y = BC × F/DE  (10);
By calculating according to formula (10), the actual distance Y between the target vehicle and the target screen can be obtained.
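A worked sketch of formulas (8)–(10), with variable names following the text (BC actual side length in cm, DE observed side length in pixels); the focal length is assumed here to be expressed in centimeters:

```python
def distance_from_focal_length(bc_cm: float, de_px: float,
                               focal_cm: float, ppi: float) -> float:
    """Y = BC * F / DE (formula (10)), with DE first converted from
    pixels to cm via CM = 2.54 * PX / PPI (formula (8))."""
    de_cm = 2.54 * de_px / ppi          # formula (8)
    return bc_cm * focal_cm / de_cm     # formula (10)
```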
In one situation, the lens focal length marked on a common camera does not equal the actual shooting focal length, and the camera may also apply some preprocessing to the picture after shooting (for example, denoising), so that the obtained focal length F deviates somewhat from the actual shooting focal length.
Optionally, for the above situation, the embodiments of this application provide another way to calculate the actual distance between the target vehicle and the target screen, which avoids the reduction of positioning accuracy caused by inaccurate camera parameters:
The actual side length of the two-dimensional code is denoted by X. The pixel count X2 of the two-dimensional code's side in the target screen image when the distance between any vehicle and the target screen is Y2 is obtained in advance, together with the pixel-to-centimeter conversion coefficient PPI.
According to the conversion relation between pixels and centimeters, one can obtain:
F/Y = (X1 × 2.54/PPI)/X  (11);
F/Y2 = (X2 × 2.54/PPI)/X  (12);
where X1 denotes the pixel count of the two-dimensional code's side currently observed in the target screen image.
Since the distance Y2 between the vehicle and the electronic screen and the corresponding pixel count X2 of the code's side are known, one can obtain:
Y × X1 = Y2 × X2  (13);
Transforming formula (13), the calculation formula of Y is obtained:
Y = Y2 × X2/X1  (14);
where Y is the straight-line distance between the target screen and the camera, i.e., the actual distance between the target vehicle and the target screen.
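A sketch of this calibration-based variant of formulas (11)–(14); X1 is the name used above for the currently observed pixel count, and one calibration pair (Y2, X2) is assumed to have been measured in advance:

```python
def distance_from_calibration(x1_px: float, x2_px: float,
                              y2_cm: float) -> float:
    """Y = Y2 * X2 / X1 (formula (14)): at a known distance Y2 the code
    side spans X2 pixels; at the unknown distance Y it spans X1 pixels.
    This avoids relying on the camera's nominal focal length."""
    return y2_cm * x2_px / x1_px
```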
FIG. 14 is a schematic diagram of an application scenario of calculating the deflection angle between the target vehicle and the target screen.
In FIG. 14, the horizontal distance from the camera to the center of the target screen is denoted by DX, and the vertical distance by DY. Since the two two-dimensional codes are symmetric about the center of the target screen and the distance between each code and the screen edge equals one half of the spacing between the two codes, the midpoint between the two codes in the target image is the center point of the target screen image. The horizontal pixel-count difference from the center point of the target image to the center point of the target screen image is denoted by C1 and the vertical pixel-count difference by C2; the pixel width of a single two-dimensional code in the target image is PX and its pixel height is PY; the actual side length of the two-dimensional code is denoted by L. The horizontal distance DX and the vertical distance DY can then be calculated by the following formulas:
DX = (C1/PX) × L  (15);
DY = (C2/PY) × L  (16);
With the actual distance between the target vehicle and the target screen being Y, the horizontal deflection angle between the target vehicle and the target screen is calculated as:
θ_horizontal = arctan(DX/Y)  (17);
The vertical deflection angle between the target vehicle and the target screen is calculated as:
θ_vertical = arctan(DY/Y)  (18);
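A sketch combining formulas (15)–(18); the arguments follow the text (C1/C2 pixel offsets between the two image centers, PX/PY pixel width and height of a single code, L actual side length in cm, Y actual distance in cm):

```python
import math

def deflection_angles(c1_px: float, c2_px: float, px: float, py: float,
                      l_cm: float, y_cm: float):
    """Return (horizontal, vertical) deflection angles in degrees."""
    dx = c1_px / px * l_cm                            # formula (15)
    dy = c2_px / py * l_cm                            # formula (16)
    horizontal = math.degrees(math.atan2(dx, y_cm))   # formula (17)
    vertical = math.degrees(math.atan2(dy, y_cm))     # formula (18)
    return horizontal, vertical
```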
As shown in FIG. 15, another schematic diagram of an application scenario of calculating the deflection angle between the target vehicle and the target screen is provided.
In FIG. 15, according to the actual side length of the two-dimensional codes and their specific position information in the target screen image, the positions of two equidistant preset horizontal lines and two equidistant preset vertical lines on each code are determined; it can be understood that the actual length of a preset horizontal line or preset vertical line is the actual side length of the two-dimensional code.
From the actual side length of the two-dimensional code and the pixel count of its side in the target screen image, the horizontal distance between each preset horizontal line and the target vehicle, and the vertical distance between each preset vertical line and the target vehicle, are calculated.
It should be noted that when the camera deflects horizontally, the vertical line segments in the image deform more, and when the camera deflects vertically, the horizontal line segments deform more; therefore, the vertical distances between the preset vertical lines and the target vehicle are used to measure the horizontal deflection angle, and the horizontal distances between the preset horizontal lines and the target vehicle are used to measure the vertical deflection angle.
The steps of calculating the horizontal deflection angle by the MUSIC algorithm are as follows: taking the spacing between two preset vertical lines as d, the incident signal (i.e., the input data) of the MUSIC algorithm is constructed as a matrix over the spacing d:
S(i) = [equation image not reproducible: S(i) is assembled from the intermediate variables Z1–Z4 below];
where the intermediate variables Z1, Z2, Z3 and Z4 are respectively:
Z1 = 0;
Z2, Z3, Z4 = [equation images not reproducible: Z2–Z4 are defined from the distance estimates Y1–Y4 and the spacing d];
where Y1 denotes the distance between the target vehicle and the target screen estimated from the first vertical line segment (e.g., the left edge of the left two-dimensional code in the target screen image); Y2 denotes the distance estimated from the second vertical line segment; Y3 denotes the distance estimated from the third vertical line segment; and Y4 denotes the distance estimated from the fourth vertical line segment (e.g., the right edge of the right two-dimensional code in the target screen image).
The covariance matrix of the input signal is calculated as:
R_S(i) = S(i)·S^H(i)  (19);
where H denotes the conjugate transpose of a matrix;
The obtained covariance matrix can be rewritten as:
R_S(i) = A·R·A^H + σ²·I  (20);
where A is the direction response vector; R is the signal correlation matrix, extracted from the input signal S(i); σ² is the noise power; and I is the identity matrix;
Eigendecomposition is performed on the covariance matrix; γ denotes an eigenvalue obtained by the decomposition, and υ(θ) the eigenvector corresponding to the eigenvalue γ. The eigenvalues are sorted by magnitude; the eigenvector υ(θ) corresponding to the largest eigenvalue is taken as the signal subspace, and the other three eigenvalues and their corresponding eigenvectors as the noise subspace, yielding the noise matrix E_n:
A^H·υ_i(θ) = 0, i = 2, 3, 4  (21);
E_n = [υ_2(θ), υ_3(θ), υ_4(θ)]  (22);
The horizontal deflection angle P is calculated from:
P = 1/(a^H·E_n·E_n^H·a)  (23);
where a denotes the signal vector (extracted from S(i)).
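Since the exact construction of S(i) is not reproducible from the source, the following is a generic sketch of the MUSIC steps the text describes — covariance (19), eigendecomposition, noise subspace (22) and pseudospectrum search (23) — with a conventional uniform-linear-array steering vector supplied as an assumption in place of the patent's distance-based input:

```python
import numpy as np

def music_peak_angle(S: np.ndarray, steering, angles_deg: np.ndarray,
                     n_signals: int = 1) -> float:
    """Generic MUSIC sketch. S: (n_sensors, n_snapshots) input matrix;
    steering(theta_deg) -> (n_sensors,) direction response vector."""
    R = S @ S.conj().T / S.shape[1]          # covariance, cf. formula (19)
    eigvals, eigvecs = np.linalg.eigh(R)     # eigenvalues in ascending order
    En = eigvecs[:, :-n_signals]             # noise subspace, cf. formula (22)
    spectrum = []
    for theta in angles_deg:
        a = steering(theta)
        denom = a.conj() @ En @ En.conj().T @ a   # cf. formula (23)
        spectrum.append(1.0 / max(np.real(denom), 1e-12))
    return float(angles_deg[int(np.argmax(spectrum))])

def ula_steering(d_over_lambda: float, n_sensors: int):
    """Conventional uniform-linear-array steering vector (an assumption;
    the patent instead builds its input from the line-segment distance
    estimates Y1-Y4, whose exact construction is not reproduced here)."""
    def steering(theta_deg: float) -> np.ndarray:
        phase = 2j * np.pi * d_over_lambda * np.sin(np.deg2rad(theta_deg))
        return np.exp(phase * np.arange(n_sensors))
    return steering
```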
In specific applications, after the camera deflects by a certain angle, the image deforms to a certain extent; moreover, different deflection angles produce different degrees of deformation. Therefore, the camera's deflection angle information can be calculated from the degree of deformation in the image.
Therefore, based on screen light communication, the deformation degrees of multiple line segments in the target image are converted into the incident signal and used as the input of the MUSIC algorithm, so as to calculate the deflection angle of the camera relative to the center of the target screen as the angle between the target vehicle and the target screen.
In practical applications, since the two-dimensional code deforms to different degrees at different positions, the deflection-angle error calculated by the MUSIC algorithm differs accordingly.
Experiments show that when the difference in deformation degree among the multiple first line segments on the identification code is largest, the deflection-angle error calculated by the MUSIC algorithm is smallest.
Therefore, the camera deflection angle that minimizes the deflection-angle error calculated by the MUSIC algorithm needs to be computed, as follows:
Let the shooting conversion matrix of the camera be:
K = [α_−N, α_1−N, α_2−N, …, α_0, …, α_N−2, α_N−1, α_N]  (24);
Since the distortion produced by the camera when shooting an image is left-right symmetric about the center, one can obtain:
α_−N = α_N, α_1−N = α_N−1 > … > α_0  (25);
where K is the distortion matrix of the camera. In general, the actual position of an object differs somewhat from its position in the image; the matrix K expresses the conversion relation between the object's actual position and its position in the image. An image is a two-dimensional matrix, and correspondingly K is also a two-dimensional matrix. Each α in K is a column vector: α_−N denotes the leftmost column vector; it can be understood that α_1−N denotes the second column vector from the left, α_2−N the third from the left, and so on.
Suppose the first line segments on the two-dimensional code image are located at positions p and q of the image. Correspondingly, the distances between these two first line segments and the target vehicle can be calculated as D_p and D_q, the pixel counts of the two first line segments are P_p and P_q respectively, the actual side length of the two-dimensional code is denoted by L, and the camera focal length by F. Transforming formula (9), one can obtain:
D_p = L×F/(P_p·α_p)  (26);
D_q = L×F/(P_q·α_q)  (27);
P_p·α_p = L×F/D_p  (28);
P_q·α_q = L×F/D_q  (29);
Taking the pixel-count difference between the two first line segments as W, one further obtains:
W = P_p·α_p − P_q·α_q  (30);
It can be obtained that when q = 0 (i.e., point q is at the center point of the target screen image) and the distance between p and q in the image is largest, the pixel-count difference between the two first line segments is largest. Therefore, during actual shooting, when the camera is controlled to deflect, the right side of the left two-dimensional code in the target screen image should be kept as close as possible to the center point of the target screen image, and the left side of the right two-dimensional code as close as possible to the center point, so that the deflection-angle error calculated by the MUSIC algorithm is smallest.
By converting the deformation degrees of the line segments in the image into the incident signal used as the input of the MUSIC algorithm, the deflection angle of the camera relative to the center of the target screen, and hence the angle between the target vehicle and the target screen, can be calculated by the MUSIC algorithm based on screen light communication, improving the efficiency and accuracy of the calculation.
In one embodiment, after step S103, the method further includes:
S104: Obtain second relative position information between other vehicles and the target screen.
In specific applications, target images containing the target screen image sent by other vehicles are obtained and processed through the above steps S101 to S103 to obtain the second relative position information between the other vehicles and the target screen; it can be understood that the second relative position information includes the distance and deflection angle between the other vehicles and the target screen.
S105: Determine third relative position information between the target vehicle and the other vehicles according to the relative position information and the second relative position information.
In specific applications, the third relative position information between the target vehicle and the other vehicles includes the distance and angle between them. The distance and angle between the target vehicle and the other vehicles can be calculated from the relative position information between the target vehicle and the target screen and the second relative position information between the other vehicles and the target screen, determining the third relative position information.
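As a worked sketch of step S105 (a planar simplification introduced here for illustration): with each vehicle's distance and horizontal deflection angle to the same screen, the two vehicles can be placed in a screen-centered coordinate frame, from which their mutual distance and bearing follow:

```python
import math

def relative_position(y_cm: float, angle_deg: float,
                      y2_cm: float, angle2_deg: float):
    """Place both vehicles in a screen-centered horizontal plane: a vehicle
    at distance Y and horizontal deflection angle theta sits at
    (Y*sin(theta), Y*cos(theta)). Returns (distance_cm, bearing_deg) of
    the other vehicle relative to the target vehicle."""
    x1 = y_cm * math.sin(math.radians(angle_deg))
    z1 = y_cm * math.cos(math.radians(angle_deg))
    x2 = y2_cm * math.sin(math.radians(angle2_deg))
    z2 = y2_cm * math.cos(math.radians(angle2_deg))
    dx, dz = x2 - x1, z2 - z1
    return math.hypot(dx, dz), math.degrees(math.atan2(dx, dz))
```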
In one embodiment, the target screen image includes at least two identical identification codes.
In specific applications, by setting two or more identification codes, the identification code information of multiple identification codes can be recognized from a single target image captured by a monocular camera; the identification code information of at least two identification codes can then be analyzed and calculated, simulating a binocular/multi-camera positioning method for vehicle positioning, while the calculation does not rely on image matching algorithms over multiple images. This reduces equipment cost and computation and expands the range of high-precision ranging; moreover, vehicle positioning based on the communication between multiple identification codes and the vehicle is little affected by environmental factors.
In one embodiment, the identification code information further includes road traffic information of the area where the target screen is located, and after determining the relative position information between the target vehicle and the target screen according to the identification code information, the method further includes:
generating a driving instruction corresponding to the target vehicle according to the relative position information and the road traffic information, the driving instruction including a driving speed and a driving direction;
sending the driving instruction to the target vehicle to control the target vehicle to drive according to the driving instruction.
In specific applications, the road traffic information of the area where the target screen is located is obtained, the road condition information of the place (road) where the target vehicle is located is analyzed and determined from the relative position information and the road traffic information, a driving instruction corresponding to the target vehicle is generated and sent to the target vehicle, and the target vehicle is controlled to drive according to the driving instruction.
In this embodiment, the target image containing the target screen image sent by the target vehicle is processed, the identification code in the target screen image is recognized to obtain identification code information, and the relative position information between the target vehicle and the target screen is calculated according to the identification code information; based on the screen light communication between the target screen and the vehicle, wide-range and high-precision vehicle positioning is realized at low equipment cost, the influence of environmental factors on ranging accuracy is reduced, and the stability of vehicle positioning is improved.
FIG. 16 shows a schematic flowchart of the vehicle positioning method based on screen light communication provided by this application; as an example rather than a limitation, the method can be applied to a vehicle.
S201: Acquire an image;
S202: When it is recognized that the image includes a target screen image, determine that the image is a target image;
S203: Send the target image to a server, so that the server determines the relative position information between the vehicle and the target screen according to the target image.
In specific applications, the camera is controlled in real time to capture images, which are analyzed and recognized; when an image is recognized as including the target screen image, that image is determined to be the target image and sent to the server, so that the server performs image recognition on the identification code in the target image to obtain identification code information and then determines the relative position information between the vehicle and the target screen according to the identification code information.
In this embodiment, images are acquired in real time, and when an image is recognized as including the target screen image, it is sent to the server as the target image, so that the server determines the relative position information between the vehicle and the target screen according to the target image; thus, based on the screen light communication between the target screen and the vehicle, wide-range and high-precision vehicle positioning is realized at low equipment cost, while the range and stability of high-precision vehicle positioning are improved.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of this application.
Corresponding to the vehicle positioning method based on screen light communication described in the above embodiments, FIG. 17 shows a structural block diagram of the vehicle positioning apparatus 100 based on screen light communication provided by an embodiment of this application; the apparatus 100 is applied to a server, and for ease of description, only the parts related to the embodiments of this application are shown.
Referring to FIG. 17, the vehicle positioning apparatus 100 based on screen light communication includes:
a receiving module 101, configured to receive a target image sent by a target vehicle, the target image including a target screen image, and the target screen image including at least one identification code;
a recognition module 102, configured to perform image recognition on the identification code to obtain identification code information;
a determining module 103, configured to determine the relative position information between the target vehicle and the target screen according to the identification code information.
In one embodiment, the apparatus 100 further includes:
an obtaining module 104, configured to obtain second relative position information between other vehicles and the target screen;
a second determining module 105, configured to determine third relative position information between the target vehicle and the other vehicles according to the relative position information and the second relative position information.
In one embodiment, the relative position information includes the actual distance between the target vehicle and the target screen, and the identification code information includes the actual side length of a preset side of the identification code;
the determining module 103 includes:
a first determining unit 1031, configured to determine the image side length of the preset side in the target screen image;
a first calculation unit 1032, configured to calculate the actual distance according to the actual side length of the preset side, the image side length of the preset side in the target screen image, and a preset conversion coefficient.
In one embodiment, the relative position information includes the deflection angle of the target vehicle relative to the target screen, and the identification code information includes the display position information of the identification code on the target screen; the determining module 103 includes:
a second determining unit 1033, configured to determine the center point of the target screen image according to the display position information;
a second calculation unit 1034, configured to calculate the image length difference between the center point of the target image and the center point of the target screen image;
a third determining unit 1035, configured to determine the deflection angle according to the image length difference.
In one embodiment, the relative position information includes the deflection angle of the target vehicle relative to the target screen, and the identification code information includes the actual lengths of multiple first line segments preset in the identification code;
the determining module 103 includes:
a fourth determining unit 1036, configured to determine the image lengths of the multiple first line segments in the target screen image;
a third calculation unit 1037, configured to calculate the distances between the multiple first line segments and the target vehicle respectively, according to the image lengths of the multiple first line segments in the target screen image, the actual lengths of the multiple first line segments, and a preset conversion coefficient;
a fifth determining unit 1038, configured to determine the deflection angle according to the distances between the multiple first line segments and the target vehicle.
In one embodiment, the identification code information further includes road traffic information of the area where the target screen is located, and the apparatus 100 further includes:
a generating module, configured to generate a driving instruction corresponding to the target vehicle according to the relative position information and the road traffic information, the driving instruction including a driving speed and a driving direction;
a sending module, configured to send the driving instruction to the target vehicle to control the target vehicle to drive according to the driving instruction.
In this embodiment, the target image containing the target screen image sent by the target vehicle is processed, the identification code in the target screen image is recognized to obtain identification code information, and the relative position information between the target vehicle and the target screen is calculated according to the identification code information; based on the screen light communication between the target screen and the vehicle, wide-range and high-precision vehicle positioning is realized at low equipment cost, the influence of environmental factors on ranging accuracy is reduced, and the stability of vehicle positioning is improved.
Corresponding to the vehicle positioning method based on screen light communication described in the above embodiments, FIG. 18 shows a structural block diagram of the vehicle positioning apparatus 200 based on screen light communication provided by an embodiment of this application; the apparatus 200 is applied to a vehicle, and for ease of description, only the parts related to the embodiments of this application are shown.
Referring to FIG. 18, the vehicle positioning apparatus 200 based on screen light communication includes:
an acquiring module 201, configured to acquire an image;
a judging module 202, configured to determine, when it is recognized that the image includes a target screen image, that the image is a target image;
a sending module 203, configured to send the target image to a server, so that the server determines the relative position information between the vehicle and the target screen according to the target image.
In this embodiment, images are acquired in real time, and when an image is recognized as including the target screen image, it is sent to the server as the target image, so that the server determines the relative position information between the vehicle and the target screen according to the target image; thus, based on the screen light communication between the target screen and the vehicle, wide-range and high-precision vehicle positioning is realized at low equipment cost, while the range and stability of high-precision vehicle positioning are improved.
It should be noted that, since the information exchange and execution processes among the above apparatuses/units are based on the same concept as the method embodiments of this application, their specific functions and the technical effects they bring can be found in the method embodiment section and are not repeated here.
FIG. 19 is a schematic structural diagram of a server provided by an embodiment of this application. As shown in FIG. 19, the server 19 of this embodiment includes: at least one processor 190 (only one is shown in FIG. 19), a memory 191, and a computer program 192 stored in the memory 191 and runnable on the at least one processor 190; the processor 190, when executing the computer program 192, implements the steps in any of the above embodiments of the vehicle positioning method based on screen light communication.
The server 19 may be a computing device such as a cloud server. The server may include, but is not limited to, a processor 190 and a memory 191. Those skilled in the art can understand that FIG. 19 is only an example of the server 19 and does not constitute a limitation on it; it may include more or fewer components than shown, combine certain components, or have different components; for example, it may also include input/output devices, network access devices, and so on.
The so-called processor 190 may be a central processing unit (CPU), and the processor 190 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 191 may in some embodiments be an internal storage unit of the server 19, such as a hard disk or memory of the server 19. In other embodiments, the memory 191 may also be an external storage device of the server 19, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital card (Secure Digital, SD), or a flash card (Flash Card) equipped on the server 19. Further, the memory 191 may include both an internal storage unit of the server 19 and an external storage device. The memory 191 is used to store the operating system, application programs, a boot loader (BootLoader), data and other programs, such as the program code of the computer program; the memory 191 may also be used to temporarily store data that has been output or will be output.
An embodiment of this application also provides a server, including: at least one processor, a memory, and a computer program stored in the memory and runnable on the at least one processor; the processor, when executing the computer program, implements the steps in any of the above method embodiments.
The embodiments of this application also provide a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps in each of the above method embodiments can be realized.
The embodiments of this application provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to implement the steps in each of the above method embodiments when executed.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not detailed or recorded in one embodiment, reference can be made to the relevant descriptions of other embodiments.
The above are only optional embodiments of this application and are not intended to limit it. For those skilled in the art, this application may have various modifications and changes. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of this application shall be included within the scope of the claims of this application.

Claims (7)

  1. A vehicle positioning method based on screen light communication, characterized in that it is applied to a server and comprises:
    receiving a target image sent by a target vehicle, the target image comprising a target screen image, and the target screen image comprising at least one identification code;
    performing image recognition on the identification code to obtain identification code information;
    determining relative position information between the target vehicle and a target screen according to the identification code information.
  2. The method according to claim 1, characterized in that the relative position information comprises the actual distance between the target vehicle and the target screen, and the identification code information comprises the actual side length of a preset side of the identification code;
    determining the relative position information between the target vehicle and the target screen according to the identification code information comprises:
    determining the image side length of the preset side in the target screen image;
    calculating the actual distance according to the actual side length of the preset side, the image side length of the preset side in the target screen image, and a preset conversion coefficient.
  3. The method according to claim 1, characterized in that the relative position information comprises the deflection angle of the target vehicle relative to the target screen, the identification code information comprises the display position information of the identification code on the target screen, and determining the relative position information between the target vehicle and the target screen according to the identification code information comprises:
    determining the center point of the target screen image according to the display position information;
    calculating the image length difference between the center point of the target image and the center point of the target screen image;
    determining the deflection angle according to the image length difference.
  4. The method according to claim 1, characterized in that the relative position information comprises the deflection angle of the target vehicle relative to the target screen, and the identification code information comprises the actual lengths of multiple first line segments preset in the identification code;
    determining the relative position information between the target vehicle and the target screen according to the identification code information comprises:
    determining the image lengths of the multiple first line segments in the target screen image;
    calculating the distances between the multiple first line segments and the target vehicle respectively, according to the image lengths of the multiple first line segments in the target screen image, the actual lengths of the multiple first line segments, and a preset conversion coefficient;
    determining the deflection angle according to the distances between the multiple first line segments and the target vehicle.
  5. The method according to any one of claims 1-4, characterized in that the identification code information further comprises road traffic information of the area where the target screen is located, and after determining the relative position information between the target vehicle and the target screen according to the identification code information, the method further comprises:
    generating a driving instruction corresponding to the target vehicle according to the relative position information and the road traffic information, the driving instruction comprising a driving speed and a driving direction;
    sending the driving instruction to the target vehicle to control the target vehicle to drive according to the driving instruction.
  6. A vehicle positioning method based on screen light communication, characterized in that it is applied to a vehicle and comprises:
    acquiring an image;
    when it is recognized that the image comprises a target screen image, determining that the image is a target image;
    sending the target image to a server, so that the server determines relative position information between the vehicle and a target screen according to the target image.
  7. A server, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, characterized in that the processor, when executing the computer program, implements the method according to any one of claims 1 to 5, or claim 6.
PCT/CN2020/096844 2020-06-18 2020-06-18 Vehicle positioning method and apparatus based on screen light communication, and server WO2021253333A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/096844 WO2021253333A1 (zh) 2020-06-18 2020-06-18 Vehicle positioning method and apparatus based on screen light communication, and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/096844 WO2021253333A1 (zh) 2020-06-18 2020-06-18 Vehicle positioning method and apparatus based on screen light communication, and server

Publications (1)

Publication Number Publication Date
WO2021253333A1 true WO2021253333A1 (zh) 2021-12-23

Family

ID=79269084

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/096844 WO2021253333A1 (zh) 2020-06-18 2020-06-18 Vehicle positioning method and apparatus based on screen light communication, and server

Country Status (1)

Country Link
WO (1) WO2021253333A1 (zh)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104637330A (zh) * 2015-02-15 2015-05-20 State Grid Corporation of China Vehicle-mounted navigation communication system based on video two-dimensional codes and overspeed prevention method
CN107871395A (zh) * 2016-09-23 2018-04-03 Sun Shaodong Intelligent traffic congestion-avoidance system
CN108663054A (zh) * 2017-03-28 2018-10-16 Ru Jingyang Vehicle positioning method and apparatus
CN108810811A (zh) * 2018-08-12 2018-11-13 Suzhou Xinfeng Hengfu Technology Co., Ltd. System and method for creating and updating a WiFi fingerprint database for a large garage
KR102009352B1 (ko) * 2018-03-21 2019-10-21 Maple Tech Co., Ltd. System composed of culture containers according to plant growth conditions using IoT devices
CN110515464A (zh) * 2019-08-28 2019-11-29 Baidu Online Network Technology (Beijing) Co., Ltd. AR display method and apparatus, vehicle and storage medium
CN110675627A (zh) * 2019-09-30 2020-01-10 Shandong University of Science and Technology Traffic information acquisition method and system based on two-dimensional code recognition
CN110992723A (zh) * 2019-12-27 2020-04-10 Wei Zhenmin Driverless traffic navigation signal device and management system thereof

Similar Documents

Publication Publication Date Title
US10964054B2 (en) Method and device for positioning
CN110322500B (zh) Optimization method and apparatus for simultaneous localization and mapping, medium and electronic device
US20180189577A1 Systems and methods for lane-marker detection
WO2020108311A1 (zh) Target object 3D detection method, apparatus, medium and device
US11694445B2 Obstacle three-dimensional position acquisition method and apparatus for roadside computing device
US11783507B2 Camera calibration apparatus and operating method
WO2022179566A1 (zh) Extrinsic parameter calibration method and apparatus, electronic device and storage medium
CN113762003B (zh) Target object detection method, apparatus, device and storage medium
CN112947419B (zh) Obstacle avoidance method, apparatus and device
CN109828250B (zh) Radar calibration method, calibration apparatus and terminal device
KR101772438B1 (ko) Apparatus and method for detecting bar-type signals in a road sign recognition system
US10679090B2 Method for estimating 6-DOF relative displacement using vision-based localization and apparatus therefor
CN114399675A (zh) Target detection method and apparatus based on fusion of machine vision and lidar
CN113034586B (zh) Road inclination angle detection method and detection system
CN111862208B (zh) Vehicle positioning method and apparatus based on screen light communication, and server
CN114662600A (zh) Lane line detection method, apparatus and storage medium
WO2021253333A1 (zh) Vehicle positioning method and apparatus based on screen light communication, and server
CN113112551B (zh) Camera parameter determination method and apparatus, roadside device and cloud control platform
US20160379087A1 Method for determining a similarity value between a first image and a second image
CN113763457B (zh) Calibration method and apparatus for drop terrain, electronic device and storage medium
CN114638947A (zh) Data annotation method and apparatus, electronic device and storage medium
CN104236518B (zh) Antenna main beam pointing detection method based on optical imaging and pattern recognition
CN117677862A (zh) Artifact point identification method, terminal device and computer-readable storage medium
JP7064400B2 (ja) Object detection device
Fei et al. Obstacle Detection for Agricultural Machinery Vehicle

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20941451

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20941451

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 09/08/2023)
