CN112305499B - Method and device for positioning according to light source - Google Patents

Method and device for positioning according to light source

Info

Publication number
CN112305499B
CN112305499B (application CN201910712800.9A)
Authority
CN
China
Prior art keywords
vehicle
alignment
light sources
mounted terminal
identifier
Prior art date
Legal status
Active
Application number
CN201910712800.9A
Other languages
Chinese (zh)
Other versions
CN112305499A (en)
Inventor
殷佳欣
Current Assignee
Huawei Cloud Computing Technologies Co Ltd
Original Assignee
Huawei Cloud Computing Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Cloud Computing Technologies Co Ltd filed Critical Huawei Cloud Computing Technologies Co Ltd
Priority to CN201910712800.9A priority Critical patent/CN112305499B/en
Publication of CN112305499A publication Critical patent/CN112305499A/en
Application granted granted Critical
Publication of CN112305499B publication Critical patent/CN112305499B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 5/00 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S 5/16 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using electromagnetic waves other than radio waves
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/10 Image acquisition
    • G06V 10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V 10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V 10/141 Control of illumination
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a method and a device for positioning according to a light source. A first vehicle identifies an alignment identifier within an image acquired by the first vehicle, the alignment identifier comprising one or more alignment light sources; the first vehicle determines coordinate information of one or more illumination sources within the image based on the alignment identifier; and the first vehicle determines its own location based on the coordinate information of the one or more illumination sources within the image. Because the features of each alignment identifier are unique, positioning the vehicle according to the coordinate information of the one or more illumination sources determined from the alignment identifier enables high-precision positioning in a shielded environment.

Description

Method and device for positioning according to light source
Technical Field
The application relates to the field of intelligent driving, in particular to a method and a device for positioning according to a light source.
Background
During driving, a vehicle must continuously monitor its surrounding environment in order to make decisions and adjust driving behavior in response to environmental changes. In manually driven vehicles, the driver maintains this attention. In the automated driving stage, the task of monitoring the surrounding environment is transferred to the vehicle-mounted computer, which detects the environment by means of vehicle-mounted sensors such as lidar, cameras, ultrasonic radar and millimeter-wave radar. However, these sensors have limitations, such as limited line-of-sight detection distance, inability to sense occluded road conditions, and reduced sensing accuracy in severe weather. Therefore, as shown in the cellular internet of vehicles schematic diagram of fig. 1, auxiliary facilities on the road are required to detect and report the environment through various internet of vehicles (vehicle to everything, V2X) communication means, so as to assist safer driving of the vehicle. The V2X communication means include vehicle-to-network communication (vehicle to network, V2N), vehicle-to-road-infrastructure communication (vehicle to infrastructure, V2I), vehicle-to-pedestrian communication (vehicle to pedestrian, V2P), vehicle-to-vehicle communication (vehicle to vehicle, V2V), and the like. The road infrastructure includes traffic lights and the like. V2V communication covers communication between vehicles, including motor vehicles and non-motor vehicles. Together, the various forms of V2X communication constitute the cellular internet of vehicles (cellular vehicle to everything, C-V2X).
Assisted driving and automated driving require frequent interaction with the surrounding environment during driving, for example in map navigation, vehicle-road coordination and vehicle-vehicle coordination scenarios. A precondition for these scenarios to work properly is that the vehicle has sufficiently accurate knowledge of its own position. Only when the vehicle's position is superimposed on the coordinate system of the high-precision map and correlated with the coordinate information carried in messages sent by the roadside and by surrounding vehicles can the relationship between a reported road event and the vehicle's own position be determined.
In open spaces or on open roads, a vehicle can receive satellite signals, and the common positioning method combines global navigation satellite system (GNSS) positioning with real-time kinematic (RTK) differential correction, which can achieve centimeter-level positioning accuracy. However, in a shielded environment, such as a tunnel, a parking lot, a logistics warehouse or an indoor bus station, the vehicle cannot receive GNSS signals and cannot be positioned.
As shown in the indoor V2X communication schematic diagram in fig. 2, when a vehicle travels in a tunnel, the V2X messages sent by other surrounding vehicles carry their own coordinates, and roadside cameras detect road traffic events and send messages to vehicles carrying the position coordinates at which the events occur. By comparing the positioning information in these messages with its own position, the vehicle can complete applications in scenarios such as autonomous emergency braking (AEB), forward collision warning (FCW), lane keeping assistance (LKA), cooperative adaptive cruise control (C-ACC) and lane merging. In these cases, vehicle positioning technology in a shielded environment is required.
Common positioning methods in shielded environments include Bluetooth, Wi-Fi, ultra-wideband (UWB) technology and the like, but these are difficult for the industry to adopt because of high cost, insufficient accuracy, difficulty in adapting to high-speed vehicle movement, or the need to install dedicated hardware on the vehicle.
Therefore, how to achieve high-precision vehicle positioning in a shielded environment is a problem to be solved.
Disclosure of Invention
The application provides a method and a device for positioning according to a light source, which are used to achieve high-precision vehicle positioning in a shielded environment.
In a first aspect, there is provided a method of positioning according to a light source, the method comprising: the method comprises the steps that a first terminal identifies an alignment mark in an image acquired by the first terminal, wherein the alignment mark comprises one or more alignment light sources; the first terminal determines coordinate information of one or more illumination light sources in the image according to the alignment mark; and the first terminal determines the position of the first terminal according to the coordinate information of one or more illumination light sources in the image. In this aspect, since the characteristic of each alignment mark is unique, the vehicle is positioned according to the coordinate information of one or more illumination light sources determined by the alignment mark, and high-precision positioning in a shielded environment can be achieved.
In one implementation, the first terminal identifies an alignment identifier within the image, including: the first terminal identifies the alignment mark according to the characteristics of the alignment mark; wherein the alignment identifier comprises one or more of the following features: the arrangement mode of the alignment light sources, the color of the alignment light sources and the brightness of the alignment light sources. In this implementation, the coordinate information of the alignment light sources and the characteristics of the alignment marks may be used to uniquely characterize one alignment mark, so that the coordinate information of the illumination light sources around the alignment mark may be accurately determined according to the alignment mark.
In yet another implementation, before the first terminal recognizes the alignment identifier within the image, the method further includes: the first terminal obtains the one or more alignment identifiers, coordinate information of the one or more illumination light sources and an arrangement relation topological graph of the one or more alignment identifiers and the one or more illumination light sources from a server. In the implementation, the first terminal can acquire the alignment identifier and the light source topology information of the illumination light source from the server before positioning, so that subsequent positioning is facilitated.
In yet another implementation, the determining, by the first terminal, coordinate information of one or more illumination light sources within the image according to the alignment identifier includes: and the first terminal determines coordinate information of one or more illumination light sources around the alignment mark in the image according to the alignment mark and the arrangement relation topological graph. In the implementation, the alignment mark and the surrounding illumination light sources have a certain arrangement relation, and after the alignment mark in the image is identified, the illumination light sources around the alignment mark can be determined according to the arrangement relation topological graph, so that the coordinate information of the illumination light sources is obtained.
In yet another implementation, before the first terminal recognizes the alignment identifier within the image, the method further includes: the first terminal obtains the one or more alignment identifiers, the coordinate information of the one or more alignment light sources and the coordinate information of the one or more illumination light sources from a server; and the first terminal determines the position of the first terminal according to the coordinate information of one or more illumination light sources and/or the coordinate information of one or more alignment light sources in the image. In this implementation, after the alignment identifier is identified, the coordinate information of the alignment light source may also be used to perform positioning of the vehicle.
In yet another implementation, the method further includes the first terminal requesting the server assisted positioning: the first terminal sends an auxiliary positioning request to the server, wherein the auxiliary positioning request comprises one or more of the following information: the method comprises the steps of identifying a first terminal, historical positions of the first terminal, captured image information, current running speed of the first terminal and camera distribution conditions of the first terminal; and the first terminal receives an alignment identifier sent by the server, wherein the alignment identifier is a new alignment identifier generated by the server according to one or more pieces of information in the auxiliary positioning request or an alignment identifier identified according to the captured image information. In this implementation, when the first terminal cannot identify the alignment identifier itself, the server may be requested for assisted positioning, and the vehicle positioning may be reliably achieved according to the alignment identifier identified by the server or the generated new alignment identifier.
In yet another implementation, the method further comprises: the first terminal sends a cooperative positioning request; the first terminal receives a cooperative positioning response sent by a second terminal, wherein the cooperative positioning response comprises the position of the second terminal and the position and the shape of a taillight of the second terminal; and the first terminal determines the position of the first terminal according to the position of the second terminal and the position and shape of the rear taillight of the second terminal. In this implementation, when the first terminal cannot recognize the alignment mark itself and the first terminal is blocked by the second terminal in front, the second terminal can be requested to cooperatively locate, so that the vehicle location can be reliably achieved.
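For illustration only, the following sketch (in Python) shows one possible way to represent the assisted-positioning request and the cooperative-positioning response described above as data structures; the field names and types are assumptions, since the application does not define a message format.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Illustrative message payloads only; every field name here is an assumption.

@dataclass
class AssistedPositioningRequest:
    terminal_id: str                                          # identification of the first terminal
    history_position: Optional[Tuple[float, float, float]] = None  # lon, lat, alt
    captured_image: Optional[bytes] = None                    # captured image information
    speed_mps: Optional[float] = None                         # current running speed
    camera_layout: List[str] = field(default_factory=list)    # e.g. ["front", "rear", "left"]

@dataclass
class CooperativePositioningResponse:
    second_terminal_position: Tuple[float, float, float]      # lon, lat, alt of the second terminal
    taillight_positions: List[Tuple[float, float, float]]     # position of each taillight
    taillight_shape: str                                       # e.g. an outline descriptor of the taillights
```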
In a second aspect, there is provided a method of positioning according to a light source, the method comprising: the server receives an auxiliary positioning request sent by a first terminal, wherein the auxiliary positioning request comprises one or more of the following information: the method comprises the steps of identifying a first terminal, historical positions of the first terminal, captured image information, current running speed of the first terminal and camera distribution conditions of the first terminal; the server identifies an alignment mark in the image according to the captured image information; or when the server does not recognize the alignment mark in the image according to the captured image information, the server generates a new alignment mark according to the one or more pieces of information; and the server sends an alignment identifier to the first terminal, wherein the alignment identifier is a new alignment identifier generated by the server according to one or more pieces of information in the auxiliary positioning request or an alignment identifier identified according to the captured image information, and the alignment identifier comprises the one or more alignment light sources. In this aspect, the server may assist the first terminal in identifying the alignment identifier or generating a new alignment identifier, thereby assisting the first terminal in positioning, and implementing high-precision positioning in a shielded environment.
In one implementation, the server generates a new alignment identifier from the one or more pieces of information, including one or more of: the server acquires an alignment mark in a history image acquired by the first terminal according to the history position of the first terminal; the server changes at least one of the arrangement sequence, brightness and color of one or more alignment light sources of the alignment mark in the obtained historical image to generate a new alignment mark; or the server generates a new alignment mark at the side or the rear of the first terminal according to the distribution condition of the cameras of the first terminal.
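As an informal illustration of the first variant of this implementation, the following sketch derives a new alignment identifier from an existing one by varying the order or brightness of its alignment light sources until the feature combination no longer collides with any deployed identifier; the data model and the uniqueness check are assumptions, and the camera-distribution variant is not covered.

```python
import itertools
from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class MarkPattern:
    mark_id: str
    colors: Tuple[str, ...]        # color of each alignment light source
    brightness: Tuple[int, ...]    # brightness level of each source (e.g. 0-255)

def generate_new_mark(old: MarkPattern, existing: List[MarkPattern]) -> MarkPattern:
    taken = {(m.colors, m.brightness) for m in existing}
    # Try reordering the sources, then dimming/boosting them, until the feature
    # combination no longer collides with any deployed alignment identifier.
    for colors in itertools.permutations(old.colors):
        for delta in (0, -64, 64):
            brightness = tuple(max(0, min(255, b + delta)) for b in old.brightness)
            if (tuple(colors), brightness) not in taken:
                return MarkPattern(old.mark_id + "-new", tuple(colors), brightness)
    raise RuntimeError("no distinct feature combination available")
```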
In yet another implementation, the method further comprises: and the server sends a cooperative positioning request to a second terminal, wherein the cooperative positioning request comprises the identification of the first terminal.
In yet another implementation, the method further comprises: the server sends one or more alignment identifiers, coordinate information of one or more illumination light sources and an arrangement relation topological graph of the one or more alignment identifiers and the one or more illumination light sources to the first terminal.
In yet another implementation, the method further comprises: the server transmits one or more alignment identifications, coordinate information of the one or more alignment light sources, and coordinate information of the one or more illumination light sources to the first terminal.
In a third aspect, there is provided a method of positioning according to a light source, the method comprising: the second terminal receives a cooperative positioning request, wherein the cooperative positioning request comprises an identification of a first terminal requesting cooperative positioning; and the second terminal sends a cooperative positioning response, wherein the cooperative positioning response comprises the position of the second terminal and the position and the shape of a taillight of the second terminal. In this aspect, the second terminal may assist the first terminal in high-precision positioning in the occluded environment.
In one implementation, the method further comprises: the second terminal identifies an alignment mark in an image acquired by the second terminal, wherein the alignment mark comprises one or more alignment light sources; the second terminal determines coordinate information of one or more illumination light sources in the image according to the alignment mark; and the second terminal determines the position of the second terminal according to the coordinate information of one or more illumination light sources in the image.
In yet another implementation, the method further comprises: the second terminal obtains the one or more alignment identifiers, the coordinate information of the one or more illumination light sources and the arrangement relation topological graph of the one or more alignment identifiers and the one or more illumination light sources from a server.
In yet another implementation, the determining, by the second terminal, coordinate information of one or more illumination light sources within the image according to the alignment identifier includes: and the second terminal determines coordinate information of one or more illumination light sources around the alignment mark in the image according to the alignment mark and the arrangement relation topological graph.
In yet another implementation, the method further includes the second terminal requesting the server assisted positioning: the second terminal sends an auxiliary positioning request to the server, wherein the auxiliary positioning request comprises one or more of the following information: the identification of the second terminal, the historical position of the second terminal, the captured image information, the current running speed of the second terminal and the distribution condition of cameras of the second terminal; and the second terminal receives an alignment identifier sent by the server, wherein the alignment identifier is a new alignment identifier generated by the server according to one or more pieces of information in the auxiliary positioning request or an alignment identifier identified according to the captured image information.
In a fourth aspect, an internet of vehicles device is provided, where the internet of vehicles device is configured to implement a behavior function of a first terminal in the method described above. The functions may be implemented by hardware, or may be implemented by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above.
Specifically, this car networking device includes:
The identification unit is used for identifying an alignment mark in the image acquired by the first terminal, and the alignment mark comprises one or more alignment light sources;
A first determining unit, configured to determine coordinate information of one or more illumination light sources in the image according to the alignment identifier;
And the second determining unit is used for determining the position of the first terminal according to the coordinate information of one or more illumination light sources in the image.
In one implementation, the identification unit is configured to identify the alignment identifier according to a feature of the alignment identifier; wherein the alignment identifier comprises one or more of the following features: the arrangement mode of the alignment light sources, the color of the alignment light sources and the brightness of the alignment light sources.
In yet another implementation, the obtaining unit is configured to obtain, from a server, the one or more alignment identifiers, coordinate information of the one or more illumination light sources, and a topological graph of an arrangement relationship between the one or more alignment identifiers and the one or more illumination light sources.
In yet another implementation, the second determining unit is configured to determine coordinate information of one or more illumination light sources around the alignment mark in the image according to the alignment mark and the arrangement relation topology map.
In yet another implementation, the first terminal requests the server assisted positioning:
A sending unit, configured to send an assisted positioning request to the server, where the assisted positioning request includes one or more of the following information: the method comprises the steps of identifying a first terminal, historical positions of the first terminal, captured image information, current running speed of the first terminal and camera distribution conditions of the first terminal;
And the receiving unit is used for receiving the alignment identifier sent by the server, wherein the alignment identifier is a new alignment identifier generated by the server according to one or more pieces of information in the auxiliary positioning request or an alignment identifier identified according to the captured image information.
In yet another implementation, the sending unit is configured to send a cooperative positioning request;
The receiving unit is used for receiving a cooperative positioning response sent by the second terminal, wherein the cooperative positioning response comprises the position of the second terminal and the position and the shape of a taillight of the second terminal;
And the second determining unit is used for determining the position of the first terminal according to the position of the second terminal and the position and the shape of the tail lamp of the second terminal.
In a fifth aspect, a server is provided, where the server is configured to implement a behavior function of the server in the above method. The functions may be implemented by hardware, or may be implemented by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above.
Specifically, the server includes:
the receiving unit is used for receiving an auxiliary positioning request sent by the first terminal, wherein the auxiliary positioning request comprises one or more of the following information: the method comprises the steps of identifying a first terminal, historical positions of the first terminal, captured image information, current running speed of the first terminal and camera distribution conditions of the first terminal;
The identification unit is used for identifying the alignment mark in the image according to the captured image information; or
A generating unit, configured to generate a new alignment identifier according to the one or more pieces of information when the identifying unit does not identify the alignment identifier in the image according to the captured image information;
And the sending unit is used for sending an alignment identifier to the first terminal, wherein the alignment identifier is a new alignment identifier generated by the server according to one or more pieces of information in the auxiliary positioning request or an alignment identifier identified according to the captured image information, and the alignment identifier comprises one or more alignment light sources.
In one implementation, the generating unit is configured to obtain, according to a historical position of the first terminal, an alignment identifier in a historical image acquired by the first terminal, and to change at least one of the arrangement order, brightness and color of one or more alignment light sources of the obtained alignment identifier in the historical image to generate a new alignment identifier; or
And the generating unit is used for generating a new alignment mark at the side or the rear of the first terminal according to the distribution condition of the cameras of the first terminal.
In yet another implementation, the sending unit is further configured to send a cooperative positioning request to the second terminal, where the cooperative positioning request includes an identifier of the first terminal.
In yet another implementation, the sending unit is further configured to send, to the first terminal, one or more alignment identifiers, coordinate information of one or more illumination light sources, and a topological graph of an arrangement relationship between the one or more alignment identifiers and the one or more illumination light sources.
In a sixth aspect, an internet of vehicles device is provided, where the internet of vehicles device is configured to implement a behavioral function of a second terminal in the method described above. The functions may be implemented by hardware, or may be implemented by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above.
Specifically, this car networking device includes:
a receiving unit, configured to receive a cooperative positioning request, where the cooperative positioning request includes an identifier of a first terminal that requests cooperative positioning;
and the transmitting unit is used for transmitting a cooperative positioning response, wherein the cooperative positioning response comprises the position of the second terminal and the position and the shape of the taillight of the second terminal.
The identification unit is used for identifying an alignment mark in the image acquired by the second terminal, and the alignment mark comprises one or more alignment light sources;
A first determining unit, configured to determine coordinate information of one or more illumination light sources in the image according to the alignment identifier;
And the second determining unit is used for determining the position of the second terminal according to the coordinate information of one or more illumination light sources in the image.
In one implementation, the obtaining unit is configured to obtain, from a server, the one or more alignment identifiers, coordinate information of the one or more illumination light sources, and a topological graph of an arrangement relationship between the one or more alignment identifiers and the one or more illumination light sources.
In yet another implementation, the second terminal requests the server assisted positioning:
A sending unit, configured to send an assisted positioning request to the server, where the assisted positioning request includes one or more of the following information: the identification of the second terminal, the historical position of the second terminal, the captured image information, the current running speed of the second terminal and the distribution condition of cameras of the second terminal;
And the receiving unit is used for receiving the alignment identifier sent by the server, wherein the alignment identifier is a new alignment identifier generated by the server according to one or more pieces of information in the auxiliary positioning request or an alignment identifier identified according to the captured image information.
In a seventh aspect, there is provided an internet of vehicles device comprising a processor, a transceiver, and a memory, wherein the memory is configured to store a computer program, the computer program comprising program instructions, the processor executing the program instructions, the program instructions comprising: identifying an alignment mark in an image acquired by the first terminal, wherein the alignment mark comprises one or more alignment light sources; determining coordinate information of one or more illumination sources in the image according to the alignment mark; and determining the position of the first terminal according to the coordinate information of one or more illumination light sources in the image.
In one implementation, the program instructions for identifying an alignment identifier within the image include: identifying the alignment mark according to the characteristic of the alignment mark; wherein the alignment identifier comprises one or more of the following features: the arrangement mode of the alignment light sources, the color of the alignment light sources and the brightness of the alignment light sources.
In yet another implementation, the program instructions further comprise: and acquiring the one or more alignment identifiers, coordinate information of the one or more illumination light sources and an arrangement relation topological graph of the one or more alignment identifiers and the one or more illumination light sources from a server.
In yet another implementation, the program instructions for determining coordinate information of one or more illumination sources within the image based on the alignment identifier include: and determining coordinate information of one or more illumination light sources around the alignment mark in the image according to the alignment mark and the arrangement relation topological graph.
In yet another implementation, the program instructions further include requesting the server assisted positioning: sending a secondary positioning request to the server, the secondary positioning request including one or more of the following information: the method comprises the steps of identifying a first terminal, historical positions of the first terminal, captured image information, current running speed of the first terminal and camera distribution conditions of the first terminal; and receiving an alignment identifier sent by the server, wherein the alignment identifier is a new alignment identifier generated by the server according to one or more pieces of information in the auxiliary positioning request or an alignment identifier identified according to the captured image information.
In yet another implementation, the program instructions further comprise: sending a cooperative positioning request; receiving a cooperative positioning response sent by a second terminal, wherein the cooperative positioning response comprises the position of the second terminal and the position and the shape of a taillight of the second terminal; and determining the position of the first terminal according to the position of the second terminal and the position and shape of the rear taillight of the second terminal.
In an eighth aspect, a server is provided, including a processor, a transceiver, and a memory, wherein the memory is configured to store a computer program, the computer program including program instructions, the processor executing the program instructions, the program instructions comprising: receiving an auxiliary positioning request sent by a first terminal, wherein the auxiliary positioning request comprises one or more of the following information: the method comprises the steps of identifying a first terminal, historical positions of the first terminal, captured image information, current running speed of the first terminal and camera distribution conditions of the first terminal; identifying an alignment mark in the image according to the captured image information; when the alignment mark in the image is not recognized according to the captured image information, the server generates a new alignment mark according to the one or more pieces of information; and sending an alignment identifier to the first terminal, wherein the alignment identifier is a new alignment identifier generated by the server according to one or more pieces of information in the auxiliary positioning request or an alignment identifier identified according to the captured image information, and the alignment identifier comprises the one or more alignment light sources.
In one implementation, the program instructions for generating a new alignment identifier according to the one or more pieces of information include one or more of the following: according to the historical position of the first terminal, an alignment mark in a historical image acquired by the first terminal is acquired; changing at least one of the arrangement sequence, brightness and color of one or more alignment light sources of the obtained alignment marks in the historical image to generate a new alignment mark; or generating a new alignment mark at the side or the rear of the first terminal according to the distribution condition of the cameras of the first terminal.
In yet another implementation, the program instructions further comprise: and sending a cooperative positioning request to a second terminal, wherein the cooperative positioning request comprises the identification of the first terminal.
In yet another implementation, the program instructions further comprise: and sending one or more alignment identifiers and coordinate information of one or more illumination light sources to the first terminal.
In a ninth aspect, there is provided an internet of vehicles device comprising a processor, a transceiver, and a memory, wherein the memory is configured to store a computer program, the computer program comprising program instructions, the processor executing the program instructions, the program instructions comprising: receiving a cooperative positioning request, wherein the cooperative positioning request comprises an identification of a first terminal requesting cooperative positioning; and transmitting a cooperative positioning response, the cooperative positioning response including a position of the second terminal and a position and shape of a rear taillight of the second terminal.
In one implementation, the program instructions further comprise: identifying an alignment mark in an image acquired by the second terminal, wherein the alignment mark comprises one or more alignment light sources; determining coordinate information of one or more illumination sources in the image according to the alignment mark; and determining the position of the second terminal according to the coordinate information of one or more illumination light sources in the image.
In yet another implementation, the program instructions further comprise: the one or more alignment identifiers and coordinate information of the one or more illumination sources are obtained from a server.
In yet another implementation, the program instructions further include requesting the server assisted positioning: sending a secondary positioning request to the server, the secondary positioning request including one or more of the following information: the identification of the second terminal, the historical position of the second terminal, the captured image information, the current running speed of the second terminal and the distribution condition of cameras of the second terminal; and receiving an alignment identifier sent by the server, wherein the alignment identifier is a new alignment identifier generated by the server according to one or more pieces of information in the auxiliary positioning request or an alignment identifier identified according to the captured image information.
In a tenth aspect, there is provided a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the method of the above aspects.
In an eleventh aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of the above aspects.
Drawings
In order to describe the embodiments of the present invention or the technical solutions in the background art more clearly, the drawings required for describing the embodiments or the background art are briefly introduced below.
FIG. 1 is a schematic diagram of a cellular Internet of vehicles;
FIG. 2 is a schematic illustration of indoor V2X communication;
FIG. 3 is a system architecture diagram for positioning according to a light source according to an embodiment of the present application;
FIG. 4 is a schematic illustration of an exemplary vehicle positioning;
FIG. 5 is a schematic diagram of an example alignment marker;
FIG. 6 is a schematic diagram of two forms of an exemplary alignment mark;
FIG. 7 is a schematic flow chart of positioning according to a light source according to an embodiment of the present application;
FIG. 8 is a schematic diagram of an exemplary actual light source layout within a tunnel;
FIG. 9 is a schematic illustration of a light source profile within an exemplary vehicle-acquired image;
FIG. 10 is a schematic flow chart of positioning according to a light source according to an embodiment of the present application;
FIG. 11 is a schematic flow chart of another embodiment of the present application for positioning according to a light source;
FIG. 12 is a schematic illustration of a first vehicle being obscured by a second vehicle in front;
FIG. 13 is a schematic flow chart of a positioning method according to another embodiment of the present application;
Fig. 14 is a schematic structural diagram of an internet of vehicles device according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of a server according to an embodiment of the present application;
fig. 16 is a schematic structural diagram of still another internet of vehicles device according to an embodiment of the present application;
fig. 17 is a schematic structural diagram of another internet of vehicles device/server according to an embodiment of the present application.
Detailed Description
Embodiments of the present invention will be described below with reference to the accompanying drawings in the embodiments of the present invention.
Fig. 3 is a diagram of a system architecture for positioning according to a light source according to an embodiment of the present application. The figure includes a car networking terminal 101 and a car networking server 102.
The internet of vehicles terminal 101, which may simply be called a terminal, may be a vehicle having a communication function, a non-motor vehicle, a road side unit (RSU), a portable device, a wearable device, a mobile phone (or "cellular" phone), a portable, pocket-sized, hand-held terminal, etc.; the present application does not limit the type of terminal. The vehicle is a typical internet of vehicles terminal, and the following embodiments are described by taking a vehicle as an example; embodiments described with a vehicle as an example can also be applied to other types of terminals. Those skilled in the art should understand that an internet of vehicles terminal may perform the method flow in the embodiments of the present application through another terminal or device associated with or coupled to the terminal. For example, when the internet of vehicles terminal is a vehicle, the vehicle may perform the method flow described in the present application through an in-vehicle terminal installed in the vehicle or an apparatus integrated in the vehicle, where the apparatus integrated in the vehicle includes a telematics box (T-Box), a domain controller (DC), a multi-domain controller (MDC), an on-board unit (OBU), an internet of vehicles chip, or the like.
The internet of vehicles server 102 may be an internet of vehicles platform or an internet of vehicles server for managing internet of vehicles terminals. The specific deployment mode of the internet of vehicles server is not limited; it may be deployed in the cloud, on locally deployed computer equipment, and the like.
Positioning according to a light source means that the vehicle positions itself by means of a vehicle-mounted camera and a visual positioning method. Assuming a set of illumination light sources is present in the shielded environment, a vehicle in that environment can locate itself based on the coordinate information of the set of illumination light sources appearing in the acquired image. The illumination light sources are used to light the shielded environment so that vehicles can travel conveniently. For example, the set may include a single illumination light source, and the vehicle can determine its own position from the coordinate information of that light source and the distance between the vehicle and the light source. As another example, as illustrated in the vehicle positioning schematic diagram of fig. 4, the set includes three illumination light sources, and the vehicle can determine its own position from their coordinate information. In fig. 4, the coordinate information of the three illumination light sources (only two-dimensional coordinates are shown here) is (x1, y1), (x2, y2) and (x3, y3); the distance between the light source at (x1, y1) and the light source at (x2, y2) is d1, and the distance between the light source at (x2, y2) and the light source at (x3, y3) is d2. From this information, the position of the first vehicle can be calculated.
However, the appearance or characteristics of the illumination light sources in common shielded environments (e.g., tunnels, parking lots, logistics warehouses, indoor bus stations, etc.) are similar, so it is difficult for a vehicle to determine which set of illumination light sources appears in the acquired image, and the vehicle therefore cannot locate itself.
The application provides the concept of an alignment mark. An alignment mark is a reference point or feature point used when positioning the vehicle; it assists the vehicle in determining the coordinate information of one or more illumination light sources around the alignment mark. An alignment light source is a light source used to form the features of an alignment mark, and the vehicle recognizes the alignment mark based on those features. "Alignment light source" and "illumination light source" are merely names distinguishing their functions; both are essentially light sources. A single light source may be used for illumination only, for alignment only, or for both illumination and alignment. The alignment mark includes one or more features, such as the arrangement of a plurality of alignment light sources, the color of the alignment light sources, and the brightness of the alignment light sources. For example, to express the features of an alignment mark, an alignment light source may be switched on or off, or controlled to display a different color. The features of any alignment mark are unique within the shielded space; that is, a lamp group can express one alignment mark through the arrangement, brightness and color of the alignment light sources in the group. As shown in the schematic diagram of the alignment marks in fig. 5, alignment mark 1 is disposed above the tunnel, and alignment mark 2 and alignment mark 3 are disposed on the two sides of the tunnel. The arrangement and number of the alignment light sources above and on the two sides of the tunnel differ, and the brightness of the alignment light sources on the two sides differs, so the first vehicle can identify each alignment mark by its features.
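As a minimal illustration of how the arrangement, brightness and color of the alignment light sources can jointly form a unique feature per alignment mark, the following sketch encodes observed light sources into a signature and looks the mark up; the signature encoding and the example entries are assumptions, not part of the application.

```python
from typing import Dict, List, Tuple

def mark_signature(sources: List[Dict]) -> Tuple:
    """Each source is a dict like {"col": "white", "bright": "high", "pos": (row, idx)}."""
    # Sort by position so that the arrangement itself is part of the signature.
    ordered = sorted(sources, key=lambda s: s["pos"])
    return tuple((s["pos"], s["col"], s["bright"]) for s in ordered)

# Example: two deployed marks that differ in brightness and layout map to
# different signatures, so the observed sources identify the mark unambiguously.
deployed = {
    mark_signature([{"col": "white", "bright": "high", "pos": (0, 0)},
                    {"col": "white", "bright": "high", "pos": (0, 1)}]): "mark-1",
    mark_signature([{"col": "white", "bright": "low", "pos": (0, 0)},
                    {"col": "white", "bright": "high", "pos": (1, 0)}]): "mark-2",
}

def identify_mark(observed_sources: List[Dict]) -> str:
    return deployed.get(mark_signature(observed_sources), "unknown")
```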
Alignment light sources may be installed in the tunnel in addition to the illumination light sources, or illumination light sources may be multiplexed as alignment light sources. In the schematic diagram of the two forms of alignment mark shown in fig. 6, the left diagram shows an alignment mark consisting of several alignment light sources arranged independently of the illumination light sources, specifically alignment light sources added laterally; the right diagram shows an alignment mark formed by multiplexed illumination light sources, specifically multiplexed longitudinal illumination light sources, identified by controlling the brightness of the multiplexed light sources.
The installation of the alignment marks is completed in the tunnel deployment stage, and the alignment marks are arranged at intervals according to the field of view of a common vehicle in construction design, so that the vehicle can see at least one alignment mark at any place in the tunnel.
The embodiment of the application provides a method and a device for positioning according to a light source.
By adopting the scheme of the embodiment of the application, as the characteristics of each alignment mark are unique, the vehicle is positioned according to the coordinate information of one or more illumination light sources determined by the alignment mark, and the high-precision positioning in the shielding environment can be realized.
Fig. 7 is a flowchart of a method for positioning according to a light source according to an embodiment of the present application, and the method may include the following steps:
S101, the first vehicle recognizes an alignment mark in the acquired image.
To facilitate vehicle travel, one or more illumination sources are typically installed within the shielded environment in which the vehicle travels. The shielding environment may be, for example, a tunnel, a parking lot, a logistics warehouse, an indoor bus station, etc., and the following embodiments are described by taking vehicle positioning in the tunnel as an example, and of course, may also be applied to other scenes of vehicle positioning in the shielding environment. An example of a schematic diagram of the actual light source layout within a tunnel is shown in fig. 8, with multiple illumination sources installed over both sides of the tunnel.
Vehicles are typically fitted with one or more cameras, for example in front of the vehicle, behind the vehicle, or on the sides of the vehicle. The vehicle collects images of the surrounding environment through the cameras. The example shown in fig. 9 is an image in a tunnel captured by a vehicle camera; the outline and position of each illumination light source in the tunnel can be clearly seen in each frame of the camera image.
And the vehicle continuously collects peripheral images through the vehicle-mounted camera in the running process. Each frame of image collected by the vehicle comprises a unique alignment mark, and the vehicle identifies the alignment mark in the image. The alignment mark comprises coordinate information of one or more alignment light sources and characteristics of the alignment mark, and one alignment mark can be uniquely determined according to the coordinate information of the one or more alignment light sources and the characteristics of the alignment mark.
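A rough sketch of the image-side step, extracting candidate light-source contours from a camera frame so that an alignment mark can then be matched by its features, might look as follows (using OpenCV); the threshold and minimum-area values are assumptions to be tuned for a real camera.

```python
import cv2
import numpy as np

def extract_light_source_contours(frame_bgr: np.ndarray, min_area: float = 30.0):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Light sources are much brighter than the surroundings in a tunnel image.
    _, bright = cv2.threshold(gray, 220, 255, cv2.THRESH_BINARY)
    # OpenCV 4.x signature: returns (contours, hierarchy).
    contours, _ = cv2.findContours(bright, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep the outline and centroid of each sufficiently large bright region;
    # these are the candidate alignment / illumination light sources whose
    # arrangement, brightness and color are then matched against known marks.
    candidates = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue
        m = cv2.moments(c)
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        candidates.append({"outline": c.reshape(-1, 2), "centroid": (cx, cy)})
    return candidates
```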
S102, the first vehicle determines coordinate information of one or more illumination light sources in the image according to the alignment mark.
The first vehicle may obtain in advance, sent by the server or configured at the factory, one or more alignment marks, coordinate information of one or more illumination light sources, and an arrangement relation topological graph (or "light source topology") of the one or more alignment marks and the one or more illumination light sources within the tunnel. One or more illumination light sources surround each alignment mark, and the alignment mark has a certain arrangement or topological relationship with those surrounding illumination light sources. Because each alignment mark is uniquely arranged among all the images acquired by vehicles, the coordinate information of the one or more illumination light sources in the image acquired by the first vehicle can be determined from the identified alignment mark. Specifically, the alignment mark and the illumination light sources in each camera frame can be matched against the arrangement relation topological graph, so that the coordinate information of each illumination light source is determined.
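For illustration, the arrangement relation topological graph can be thought of as a simple mapping from each alignment mark to the surveyed coordinates of the illumination light sources around it; the representation and coordinate values below are assumptions.

```python
from typing import Dict, List

# Each alignment mark id points to the illumination light sources deployed
# around it, each with pre-surveyed world coordinates (values are made up).
light_source_topology: Dict[str, List[Dict]] = {
    "mark-1": [
        {"source_id": "lamp-101", "lon": 116.3901, "lat": 39.9071, "alt": 32.5},
        {"source_id": "lamp-102", "lon": 116.3903, "lat": 39.9071, "alt": 32.5},
        {"source_id": "lamp-103", "lon": 116.3905, "lat": 39.9071, "alt": 32.5},
    ],
    # ... one entry per alignment mark in the tunnel
}

def illumination_sources_around(mark_id: str) -> List[Dict]:
    """Step S102: map the recognised alignment mark to the coordinate
    information of the illumination light sources around it."""
    return light_source_topology.get(mark_id, [])
```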
The coordinate information of an illumination light source includes the longitude, latitude and altitude of the light source. Because an illumination light source appears in the image acquired by the vehicle as a contour, its coordinate information also includes one or more coordinates describing that contour, i.e. a series of coordinate points along the light source contour.
S103, the first vehicle determines the position of the first vehicle according to the coordinate information of one or more illumination light sources in the image.
After determining the coordinate information of one or more illumination sources within the image acquired by the first vehicle, the position of the first vehicle itself may be calculated based on the coordinate information of the one or more illumination sources and the distances between the plurality of illumination sources in the image. The position of the first vehicle comprises longitude and latitude, altitude coordinates and course angle information of the first vehicle. In addition, the first vehicle may also determine the location of the first vehicle based on one or both of the coordinate information of the one or more aligned light sources, the coordinate information of the one or more illumination light sources. That is, after the first vehicle recognizes the alignment mark, the coordinate information of the alignment light source included in the first vehicle can also be used for positioning the vehicle.
When positioning is completed according to the coordinate information of the illumination light sources, the vehicle compares the actual positions and shapes of the illumination light sources with their positions and shapes in the image captured by the camera, and calculates the position and shooting angle of the camera. The mounting position of the camera on the vehicle is known, so the position and heading angle of the midpoint of the vehicle's rear axle can be calculated from the vehicle model and dimensions, giving the vehicle positioning information. During driving, images are acquired continuously, and each time an alignment mark is identified in an image, the coordinate information of the surrounding illumination light sources is determined. At 120 km/h, a vehicle travels approximately 33.3 meters per second. A typical on-board camera frame rate is 30 fps (frames per second), i.e. 30 images per second, so a vehicle traveling at high speed acquires approximately one image every 1.1 meters.
Specifically, as shown in the schematic diagram of vehicle positioning in fig. 9, the first vehicle determines the coordinate information of three illumination light sources (only two-dimensional coordinates are shown here) as (x1, y1), (x2, y2) and (x3, y3), the distance d1 between the light source at (x1, y1) and the light source at (x2, y2), and the distance d2 between the light source at (x2, y2) and the light source at (x3, y3). From this information, the position of the first vehicle can be calculated. The three illumination light sources with known coordinates can be used as fixed features: during driving, the camera captures and tracks the changes of these three light sources, combined with vehicle odometry. The vehicle captures the positions of the three light sources at time 1 and again at time 2; from the light source positions at the two times and the distance travelled by the vehicle, the position and attitude of the vehicle camera can be determined by triangulation. A similar algorithm is used in simultaneous localization and mapping (SLAM). Another method is, after the coordinates of an illumination light source are determined, to calculate the vertical distance between the camera and the light source from the tunnel height and the vehicle body height, and then to calculate the longitudinal and lateral distances between the vehicle camera and the illumination light source by triangulation.
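One way to realize the step of comparing the actual positions of the light sources with their positions in the captured image and calculating the camera position and shooting angle is a standard perspective-n-point (PnP) solve; the sketch below uses OpenCV's solvePnP with made-up light-source coordinates, image points, camera intrinsics and camera-to-rear-axle offset, so all numeric values are assumptions.

```python
import cv2
import numpy as np

object_points = np.array([            # surveyed light-source coordinates (metres, local tunnel frame)
    [0.0, 0.0, 5.0],
    [4.0, 0.0, 5.0],
    [8.0, 0.0, 5.0],
    [8.0, 3.5, 5.0],
], dtype=np.float64)
image_points = np.array([             # the same light sources detected in the camera frame (pixels)
    [412.0, 180.0],
    [655.0, 192.0],
    [890.0, 204.0],
    [902.0, 410.0],
], dtype=np.float64)
camera_matrix = np.array([[1000.0, 0.0, 960.0],
                          [0.0, 1000.0, 540.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)
camera_position = (-R.T @ tvec).ravel()          # camera position in the light-source frame

# Known camera mounting offset relative to the rear-axle midpoint (vehicle
# model dependent); rotating it into the world frame gives the vehicle position.
camera_to_rear_axle = np.array([0.0, -2.0, -1.2])
vehicle_position = camera_position + R.T @ camera_to_rear_axle
heading_rotation = R.T                            # camera attitude, from which the heading angle follows
```

With fewer than four light sources, or when exploiting successive frames and vehicle odometry as in the triangulation and SLAM-like approach described above, a different solver would be needed; the PnP form is shown only because it maps directly onto the "compare actual versus imaged positions" wording.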
According to the method for positioning according to a light source provided in this embodiment of the application, since the features of each alignment mark are unique, the vehicle is positioned according to the coordinate information of the one or more illumination light sources determined from the alignment mark, so that high-precision positioning in an occluded environment can be achieved.
Fig. 10 is a flowchart of another method for positioning according to a light source according to an embodiment of the present application, which may include the following steps:
s201, the first vehicle acquires one or more alignment identifiers, coordinate information of one or more illumination light sources and an arrangement relation topological graph of the one or more alignment identifiers and the one or more illumination light sources from a server.
Before providing the positioning service, the positioning service provider acquires, by means of surveying and mapping, the one or more alignment identifiers, the coordinate information of the one or more illumination light sources and the arrangement relation topological graph of the one or more alignment identifiers and the one or more illumination light sources in the tunnel, and establishes a connection from each alignment light source to the server, so that the server can control the brightness, color and the like of the alignment light sources. In addition, the first vehicle may also obtain the coordinate information of the one or more alignment light sources from the server.
Immediately before the vehicle enters the tunnel, it can receive the one or more alignment identifiers, the coordinate information of the one or more illumination light sources and the arrangement relation topological graph of the one or more alignment identifiers and the one or more illumination light sources sent by a server (in particular an edge server). The one or more alignment identifiers and the one or more illumination light sources have a certain arrangement relation or topological relation and can form an arrangement relation topological graph, or light source topological graph, which comprises the one or more illumination light sources distributed around each alignment identifier. The coordinates of the illumination light sources and the alignment identifiers can be aligned with the coordinates of the high-precision map owned by the vehicle and can be loaded directly onto that map. An illumination light source or alignment identifier is not a single point; its coordinate information describes an area. For example, for a square LED, the coordinate information of the illumination light source must be able to describe the outline of the light source. The coordinate information of an alignment identifier must be able to mark the area of the whole identifier and describe the area of each alignment light source within it; for example, if the alignment identifier consists of 4 circular LEDs, its coordinate information must describe the center coordinates and radius of each LED lamp, or represent the outlines of the LEDs by a plurality of coordinate points.
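A minimal sketch of how such a light source topology could be represented on the vehicle side is shown below; the class and field names are assumptions for illustration only, not data structures defined by the embodiment:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class AlignmentLightSource:
    center: Tuple[float, float, float]   # x, y, z coordinates of the LED center
    radius_m: float                      # radius for a circular LED, per the example above

@dataclass
class IlluminationLightSource:
    outline: List[Tuple[float, float, float]]  # coordinate points describing the lamp outline

@dataclass
class AlignmentIdentifier:
    identifier_id: str
    alignment_lights: List[AlignmentLightSource]
    # illumination light sources arranged around this alignment identifier
    nearby_illumination: List[IlluminationLightSource] = field(default_factory=list)

# The arrangement relation topological graph is then the collection of alignment
# identifiers, each carrying the illumination light sources arranged around it.
LightSourceTopology = List[AlignmentIdentifier]
```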
Specifically, immediately before the vehicle enters the tunnel, a positioning request may be sent to the server, where the positioning request includes the vehicle identification and may further include the current location of the first vehicle. The server determines, according to the vehicle identification, that the vehicle can use the positioning service in the tunnel, and sends the alignment identifiers and the coordinate information of the illumination light sources to the vehicle.
Of course, the vehicle does not need to acquire the alignment mark and the coordinate information of the illumination light source before entering the tunnel each time, and the alignment mark and the coordinate information of the illumination light source in each tunnel can be configured before the vehicle leaves the factory.
S202, the first vehicle attempts to recognize an alignment mark in the image acquired by the first vehicle and determines whether the alignment mark is recognized. If yes, step S203 is performed; otherwise, step S205 is performed.
Specific implementation of the first vehicle to identify the alignment identifier within the image captured by the first vehicle may refer to step S101 of the embodiment shown in fig. 4. But the vehicle may not recognize the alignment mark for various reasons, such as blocked alignment marks, vehicle-mounted camera damage, camera angle problems, etc. Therefore, it is necessary to determine whether the first vehicle recognizes the alignment mark.
If the first vehicle recognizes the alignment mark, steps S203 to S204 are performed, i.e. the first vehicle may be positioned according to the alignment mark recognized by itself, and the specific implementation thereof may refer to steps S102 to S103 of the embodiment shown in fig. 4. Otherwise, steps S205 to S209 are performed, and the first vehicle needs to request the server to assist in positioning.
S203, the first vehicle determines coordinate information of one or more illumination light sources around the alignment mark in the image according to the alignment mark and the arrangement relation topological graph.
S204, the first vehicle determines the position of the first vehicle according to the coordinate information of one or more illumination light sources in the image.
S205, when the first vehicle does not recognize the alignment mark, the first vehicle sends an auxiliary positioning request to the server.
Correspondingly, the server receives the auxiliary positioning request.
Wherein the assisted positioning request includes one or more of the following information: the identification of the first vehicle, the historical position of the first vehicle, the captured image information, the current travel speed of the first vehicle, and the camera distribution of the first vehicle. The historical position of the first vehicle refers to a position previously determined by the first vehicle; this historical position serves as a reference for the server when re-recognizing the alignment mark. The camera distribution of the first vehicle refers to the mounting positions of the cameras on the vehicle.
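As an illustration only, an assisted positioning request carrying these optional fields could be modeled as follows; the field names and the JSON serialization are assumptions for this sketch, not a message format defined by the embodiment:

```python
from dataclasses import dataclass, asdict
from typing import Optional, List, Tuple
import json

@dataclass
class AssistedPositioningRequest:
    vehicle_id: str                                    # identification of the first vehicle
    historical_position: Optional[Tuple[float, float, float]] = None  # last known position
    captured_image: Optional[bytes] = None             # image information captured by the vehicle
    current_speed_kmh: Optional[float] = None          # current travel speed
    camera_layout: Optional[List[str]] = None          # mounting positions, e.g. ["front", "rear"]

    def to_json(self) -> str:
        payload = asdict(self)
        payload.pop("captured_image")                  # image bytes would be transferred separately
        return json.dumps(payload)

request = AssistedPositioningRequest("vehicle-001",
                                     historical_position=(31.23, 121.47, 4.0),
                                     current_speed_kmh=95.0,
                                     camera_layout=["front", "rear"])
print(request.to_json())
```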
S206, the server identifies the alignment mark in the image according to the captured image information; or when the server does not recognize the alignment mark in the image according to the captured image information, the server generates a new alignment mark according to one or more pieces of information.
After receiving the auxiliary positioning request, the server determines whether positioning service can be provided for the first vehicle according to the identification of the first vehicle.
Then, because the first vehicle cannot recognize the alignment mark, possibly due to damage to its own vehicle-mounted camera or a camera angle problem, the server first determines whether it can recognize the alignment mark in the image according to the image information captured by the first vehicle and carried in the assisted positioning request.
If the server cannot recognize the alignment mark in the image, the server generates a new alignment mark according to the historical position of the first vehicle, the current travel speed of the first vehicle and the camera distribution of the first vehicle, ensuring that the vehicle can recognize the alignment mark in its own field of view. For example, if the reason the first vehicle cannot recognize the alignment mark is that the front camera is obscured by the vehicle in front, and the vehicle is equipped with a rear camera, the server may generate a new alignment mark behind the vehicle.
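A simplified sketch of this server-side decision, under assumptions made here (hypothetical function names and a placement rule reduced to choosing the front or rear of the vehicle), might look like the following:

```python
from typing import List, Optional

def choose_new_alignment_placement(image_recognized: bool,
                                   camera_layout: List[str],
                                   front_view_blocked: bool) -> Optional[str]:
    """Decide where the server generates a new alignment identifier.

    Returns None when the alignment identifier was already recognized from the
    uploaded image; otherwise returns a placement relative to the requesting vehicle.
    """
    if image_recognized:
        return None                       # no new identifier needed
    if front_view_blocked and "rear" in camera_layout:
        return "behind_vehicle"           # alignment light sources behind the vehicle, seen by the rear camera
    return "ahead_of_vehicle"             # default: place it in the forward field of view

print(choose_new_alignment_placement(False, ["front", "rear"], True))  # behind_vehicle
```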
S207, the server sends the alignment mark to the first vehicle.
Correspondingly, the first vehicle receives the alignment identifier sent by the server.
The alignment identifier is a new alignment identifier generated by the server according to one or more pieces of information in the auxiliary positioning request or an alignment identifier identified according to the captured image information.
As described above, if the reason the first vehicle cannot recognize the alignment mark is that the front camera is obscured by the vehicle in front, the server may generate an alignment mark behind the vehicle, and the server may also notify the first vehicle to capture an image via the rear camera.
S208, the first vehicle redetermines the coordinate information of one or more illumination light sources in the image according to the alignment mark sent by the server.
After the first vehicle receives the alignment identifier recognized by the server, the specific implementation of determining the coordinate information of the one or more illumination light sources in the image acquired by the first vehicle according to that alignment identifier is the same as the manner in which the first vehicle determines the coordinate information according to an alignment identifier it recognized itself; for details, refer to step S102 of the embodiment shown in fig. 4.
The first vehicle may also re-capture an image, recognize the new alignment identifier generated by the server in the re-captured image, and re-determine the coordinate information of the one or more illumination light sources within the image acquired by the first vehicle based on the recognized new alignment identifier. For details, refer to steps S101 and S102 of the embodiment shown in fig. 4.
S209, the first vehicle redetermines the first position of the first vehicle according to the coordinate information of one or more illumination light sources in the image acquired by the first vehicle.
The specific implementation of the first vehicle re-determining the first position of the first vehicle according to the coordinate information of the one or more illumination light sources in the image acquired by the first vehicle may refer to step S103 of the embodiment shown in fig. 4.
According to the method for positioning according to a light source provided in this embodiment of the application, since the features of each alignment mark are unique, the vehicle is positioned according to the coordinate information of the one or more illumination light sources determined from the alignment mark, so that high-precision positioning in an occluded environment can be achieved; and when the alignment mark cannot be recognized, assisted positioning can be requested from the server, and the vehicle can be positioned reliably according to the alignment mark recognized by the server or the new alignment mark generated by it.
Fig. 11 is a flowchart of another method for positioning according to a light source according to an embodiment of the present application, which may include the following steps:
s301, a first vehicle acquires one or more alignment identifiers, coordinate information of one or more illumination light sources and an arrangement relation topological graph of the one or more alignment identifiers and the one or more illumination light sources from a server.
A specific implementation of this step may refer to step S201 of fig. 10.
S302, the first vehicle attempts to recognize an alignment mark in the image acquired by the first vehicle and determines whether the alignment mark is recognized. If yes, step S303 is performed; otherwise, step S305 is performed.
Specific implementation of the first vehicle to identify the alignment identifier within the image captured by the first vehicle may refer to step S101 of the embodiment shown in fig. 4. But the vehicle may not recognize the alignment mark for various reasons, such as blocked alignment marks, vehicle-mounted camera damage, camera angle problems, etc. Therefore, it is necessary to determine whether the first vehicle recognizes the alignment mark.
If the first vehicle recognizes the alignment mark, steps S303 to S304 are performed, i.e. the first vehicle may be positioned according to the alignment mark recognized by itself, and the specific implementation thereof may refer to steps S102 to S103 of the embodiment shown in fig. 4. Otherwise, steps S305 to S313 are performed, and the first vehicle needs to request server-assisted positioning, or request cooperative positioning of the second vehicle.
S303, the first vehicle determines coordinate information of one or more illumination light sources around the alignment mark in the image according to the alignment mark and the arrangement relation topological graph.
S304, the first vehicle determines the position of the first vehicle according to the coordinate information of one or more illumination light sources in the image.
S305, when the first vehicle does not recognize the alignment mark, it is determined whether the first vehicle is blocked by a second vehicle in front. If yes, step S311 is performed; otherwise, step S306 is performed.
When the first vehicle does not recognize the alignment mark, it may further be determined whether the first vehicle is blocked by a second vehicle in front. As shown in fig. 12, a schematic view of the first vehicle being blocked by the second vehicle in front, if there is no second vehicle in front of the first vehicle, the first vehicle can capture an image containing the alignment mark; when the body of the second vehicle is higher than that of the first vehicle and the first vehicle is blocked by the second vehicle, the first vehicle cannot capture an image that fully contains the alignment mark. Because the server monitors and manages many vehicles in the tunnel, having the server provide the positioning service whenever the first vehicle cannot recognize the alignment mark places a high demand on the server's response speed, whereas a single second vehicle can provide positioning assistance to the first vehicle with a faster response. Therefore, if the first vehicle is not blocked by a second vehicle, steps S306 to S310 are performed, i.e. the server assists in recognizing the alignment mark or generates a new alignment mark; if the first vehicle is blocked by a second vehicle, steps S311 to S313 are performed. Whether the first vehicle is blocked by the second vehicle may be determined by the first vehicle or by the server, by analyzing the images captured by the first vehicle.
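A sketch of this branching decision on the vehicle side, with hypothetical function names and the occlusion check left abstract, could look like this:

```python
def positioning_strategy(alignment_recognized: bool, blocked_by_vehicle_ahead: bool) -> str:
    """Select how the first vehicle obtains its position, mirroring steps S303-S313."""
    if alignment_recognized:
        return "self_positioning"          # steps S303-S304: use the recognized alignment mark
    if blocked_by_vehicle_ahead:
        return "cooperative_positioning"   # steps S311-S313: ask the second vehicle in front
    return "server_assisted_positioning"   # steps S306-S310: request assistance from the server

print(positioning_strategy(False, True))   # cooperative_positioning
```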
S306, the first vehicle sends an auxiliary positioning request to the server, wherein the auxiliary positioning request comprises one or more of the following information: the identification of the first vehicle, the historical location of the first vehicle, the captured image information, the current travel speed of the first vehicle, the camera distribution of the first vehicle.
S307, the server identifies the alignment mark in the image according to the captured image information; or when the server does not recognize the alignment mark in the image according to the captured image information, the server generates a new alignment mark according to one or more pieces of information.
In this step, the server determines, from the captured image information transmitted by the first vehicle, whether the first vehicle is blocked by the second vehicle.
S308, the server sends the alignment identification to the first vehicle.
S309, the first vehicle redetermines coordinate information of one or more illumination light sources in the image acquired by the first vehicle according to the generated new alignment mark or the alignment mark identified by the server.
S310, the first vehicle redetermines the position of the first vehicle according to the coordinate information of one or more illumination light sources in the image acquired by the first vehicle.
The above specific implementation of steps S306 to S310 may refer to steps S205 to S209 of the embodiment shown in fig. 10, respectively.
S311, the server sends a cooperative positioning request to the second vehicle.
Accordingly, the second vehicle receives the cooperative positioning request.
When the first vehicle cannot recognize the alignment mark and the first vehicle is blocked by the second vehicle, the first vehicle may notify the server of the recognition result, or the determination may be made by the server. A cooperative positioning request may then be sent by the server to the second vehicle. The cooperative positioning request includes the identification of the first vehicle.
Of course, the cooperative positioning request may also be sent by the first vehicle directly to the second vehicle.
The second vehicle can determine its own position, as it can capture an image containing the complete alignment mark, resulting in a third position of the second vehicle.
S312, the second vehicle sends a cooperative positioning response to the first vehicle.
Accordingly, the first vehicle receives the cooperative positioning response.
Specifically, the second vehicle responds to the cooperative positioning request sent by the first vehicle, and sends a cooperative positioning response to the first vehicle according to the identification of the first vehicle. Wherein the cooperative positioning response includes a third position of the second vehicle and a position and shape of a rear taillight of the second vehicle.
S313, the first vehicle determines the position of the first vehicle according to the position of the second vehicle and the position and shape of the rear lamp of the second vehicle.
After receiving the cooperative positioning response, the first vehicle can quickly determine its own position according to the position of the second vehicle and the position and shape of the rear taillights of the second vehicle. Specifically, the first vehicle captures and recognizes the two taillights of the second vehicle through its front camera, calculates its position relative to the two taillights of the second vehicle by an optical positioning method, and then accurately determines the current position of the first vehicle from the current position of the second vehicle. The position of the second vehicle here is the absolute position of the second vehicle. The position of the rear taillights is their position relative to the position of the second vehicle. The first vehicle needs to calculate the absolute positions of the taillights from these two positions.
The second vehicle may also combine the position of the second vehicle and the relative position of the rear taillights, and carry the absolute positions and shapes of the rear taillights directly in the response.
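The following sketch shows, under assumptions made here (the offset of the first vehicle from a taillight is taken as already computed by the optical positioning step, only planar coordinates are combined, and all names are hypothetical), how the absolute positions could be assembled:

```python
from typing import Tuple

Vec2 = Tuple[float, float]

def absolute_taillight_position(second_vehicle_pos: Vec2, taillight_offset: Vec2) -> Vec2:
    # Absolute position of one taillight = absolute position of the second vehicle
    # plus the taillight's offset relative to that vehicle.
    return (second_vehicle_pos[0] + taillight_offset[0],
            second_vehicle_pos[1] + taillight_offset[1])

def first_vehicle_position(taillight_abs: Vec2, offset_from_taillight: Vec2) -> Vec2:
    # Position of the first vehicle = absolute taillight position plus the offset
    # of the first vehicle relative to that taillight (from optical positioning).
    return (taillight_abs[0] + offset_from_taillight[0],
            taillight_abs[1] + offset_from_taillight[1])

left_taillight = absolute_taillight_position((100.0, 50.0), (-2.0, -0.8))
print(first_vehicle_position(left_taillight, (-12.0, 0.8)))   # (86.0, 50.0)
```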
According to the method for positioning according to a light source provided in this embodiment of the application, since the features of each alignment mark are unique, the vehicle is positioned according to the coordinate information of the one or more illumination light sources determined from the alignment mark, so that high-precision positioning in an occluded environment can be achieved. When the alignment mark cannot be recognized and the first vehicle is not blocked by a second vehicle in front, assisted positioning can be requested from the server, and the vehicle can be re-positioned according to the alignment mark recognized by the server or the new alignment mark generated by it; or, when the alignment mark cannot be recognized and the first vehicle is blocked by a second vehicle in front, cooperative positioning by the second vehicle can be requested, so that vehicle positioning can be achieved reliably.
Fig. 13 is a flowchart of another method for positioning according to a light source according to an embodiment of the present application, which may include the following steps:
S401, the first vehicle and the second vehicle acquire one or more alignment identifiers, coordinate information of one or more illumination light sources and an arrangement relation topological graph of the one or more alignment identifiers and the one or more illumination light sources from a server.
Each vehicle entering the tunnel may obtain from the server one or more alignment identifications and coordinate information of one or more illumination sources. A specific implementation of this step may refer to step S201 of the embodiment shown in fig. 10.
S402, the second vehicle attempts to recognize an alignment mark in the image acquired by the second vehicle and determines whether the alignment mark is recognized. If yes, step S403 is performed; otherwise, step S405 is performed.
S403, the second vehicle determines coordinate information of one or more illumination light sources around the alignment mark in the image according to the alignment mark and the arrangement relation topological graph.
S404, the second vehicle determines the position of the second vehicle according to the coordinate information of one or more illumination light sources in the image.
S405, when the second vehicle does not recognize the alignment identifier, the second vehicle sends an auxiliary positioning request to the server, where the auxiliary positioning request includes one or more of the following information: the identification of the second vehicle, the historical location of the second vehicle, the captured image information, the current travel speed of the second vehicle, the camera profile of the second vehicle.
S406, the server identifies an alignment mark in the image according to the captured image information; or when the server does not recognize the alignment mark in the image according to the captured image information, the server generates a new alignment mark according to one or more pieces of information.
S407, the server sends the alignment identification to the second vehicle.
And S408, the second vehicle redetermines the coordinate information of one or more illumination light sources in the image acquired by the second vehicle according to the alignment mark.
S409, the second vehicle redetermines the position of the second vehicle according to the coordinate information of the one or more illumination light sources in the image.
In steps S402 to S409 above, the second vehicle performs vehicle positioning; for details, refer to steps S202 to S209 of the embodiment shown in fig. 10.
S410, the first vehicle attempts to recognize the alignment mark in the image acquired by the first vehicle, fails to recognize the alignment mark in the image, and is blocked by the second vehicle in front.
The first vehicle behind the second vehicle needs to perform vehicle positioning, and the first vehicle itself attempts to identify the alignment mark in the image acquired by the first vehicle, but fails to identify the alignment mark in the image acquired by the first vehicle because the first vehicle is blocked by the second vehicle in front.
S411, the first vehicle sends a cooperative positioning request to the second vehicle.
Accordingly, the second vehicle receives the cooperative positioning request.
The cooperative positioning request message includes an identification of the first vehicle.
S412, the second vehicle sends a cooperative positioning response to the first vehicle.
Accordingly, the first vehicle receives the cooperative positioning response.
Specifically, the second vehicle responds to the cooperative positioning request sent by the first vehicle, and sends a cooperative positioning response message to the first vehicle according to the identification of the first vehicle. Wherein the cooperative positioning response message includes a third position of the second vehicle and a position and shape of a rear taillight of the second vehicle.
S413, the first vehicle determines the first position of the first vehicle according to the position of the second vehicle and the position and shape of the rear lamp of the second vehicle.
After receiving the cooperative positioning response message, the first vehicle can quickly determine its own position according to the position of the second vehicle and the position and shape of the taillights of the second vehicle. Specifically, the first vehicle captures and recognizes the two taillights of the second vehicle through its front camera, calculates its position relative to the two taillights of the second vehicle by an optical positioning method, and then accurately determines the current position of the first vehicle from the current position of the second vehicle.
According to the method for positioning according to a light source provided in this embodiment of the application, since the features of each alignment mark are unique, the vehicle is positioned according to the coordinate information of the one or more illumination light sources determined from the alignment mark, so that high-precision positioning in an occluded environment can be achieved; and when the alignment mark cannot be recognized and the first vehicle is blocked by the second vehicle in front, cooperative positioning by the second vehicle can be requested, so that vehicle positioning can be achieved reliably.
It will be appreciated by those skilled in the art that the method described above by way of example for the first vehicle and the second vehicle is equally applicable to other types of terminals, e.g. the first vehicle may be a first terminal of another type (e.g. a first motorcycle) and the second vehicle may be a second terminal of another type (e.g. a second motorcycle). The following provides an apparatus according to an embodiment of the present application.
Based on the same concept of the method for positioning according to the light source in the above embodiment, as shown in fig. 14, the embodiment of the present application further provides a device 100 for internet of vehicles, where the device for internet of vehicles may be used to implement the method flow related to the terminal for internet of vehicles in the above embodiment. The internet of vehicles device 100 includes: the identifying unit 11, the first determining unit 12, and the second determining unit 13 may further include an acquiring unit 14, a transmitting unit 15, and a receiving unit 16 (connected by a broken line in the figure); illustratively:
an identifying unit 11, configured to identify an alignment identifier in an image acquired by the first terminal, where the alignment identifier includes one or more alignment light sources;
A first determining unit 12, configured to determine coordinate information of one or more illumination light sources in the image according to the alignment identifier;
A second determining unit 13, configured to determine a position of the first terminal according to coordinate information of one or more illumination light sources in the image.
In one implementation, the identifying unit 11 is configured to identify the alignment identifier according to a feature of the alignment identifier; wherein the alignment identifier comprises one or more of the following features: the arrangement mode of the alignment light sources, the color of the alignment light sources and the brightness of the alignment light sources.
In yet another implementation, the obtaining unit 14 is configured to obtain, from a server, the one or more alignment identifiers, coordinate information of the one or more illumination light sources, and a topological graph of an arrangement relationship between the one or more alignment identifiers and the one or more illumination light sources.
In yet another implementation, the first determining unit 12 is configured to determine coordinate information of one or more illumination light sources around the alignment identifier in the image according to the alignment identifier and the arrangement relation topological graph.
In yet another implementation, when the first terminal requests server-assisted positioning:
A sending unit 15, configured to send an assisted positioning request to the server, where the assisted positioning request includes one or more of the following information: the method comprises the steps of identifying a first terminal, historical positions of the first terminal, captured image information, current running speed of the first terminal and camera distribution conditions of the first terminal;
And a receiving unit 16, configured to receive an alignment identifier sent by the server, where the alignment identifier is a new alignment identifier generated by the server according to one or more pieces of information in the assisted positioning request, or an alignment identifier identified according to the captured image information.
In yet another implementation, the sending unit 15 is configured to send a cooperative positioning request;
the receiving unit 16 is configured to receive a cooperative positioning response sent by a second terminal, where the cooperative positioning response includes a position of the second terminal and a position and a shape of a taillight of the second terminal;
A second determining unit 13 for determining the position of the first terminal according to the position of the second terminal and the position and shape of the rear lamp of the second terminal.
For a more detailed description of the above units reference is made to the description of the first terminal in the method of positioning according to the light source described above with reference to fig. 7, 10, 11, 13.
The internet of vehicles device 100 performs vehicle positioning according to the coordinate information of the one or more illumination light sources determined from the alignment identifier, so that high-precision positioning in an occluded environment can be achieved.
Based on the same concept of the method for positioning according to the light source in the above embodiment, as shown in fig. 15, an embodiment of the present application further provides a server 200, which may be applied to the method flow related to the server in the above method embodiment. The server 200 includes: a receiving unit 21, an identifying unit 22, a generating unit 23, a transmitting unit 24; illustratively:
A receiving unit 21, configured to receive an assisted positioning request sent by the first terminal, where the assisted positioning request includes one or more of the following information: the method comprises the steps of identifying a first terminal, historical positions of the first terminal, captured image information, current running speed of the first terminal and camera distribution conditions of the first terminal;
an identifying unit 22, configured to identify an alignment mark in the image based on the captured image information; or
A generating unit 23, configured to generate a new alignment identifier according to the one or more pieces of information when the identifying unit does not identify the alignment identifier in the image according to the captured image information;
And a sending unit 24, configured to send, to the first terminal, an alignment identifier, where the alignment identifier is a new alignment identifier generated by the server according to one or more pieces of information in the assisted positioning request, or an alignment identifier identified according to the captured image information, and the alignment identifier includes one or more alignment light sources.
In one implementation, the generating unit 23 is configured to obtain, according to the historical position of the first terminal, an alignment identifier in a historical image acquired by the first terminal, and change at least one of the arrangement sequence, brightness and color of the one or more alignment light sources of the obtained alignment identifier in the historical image to generate a new alignment identifier; or
And the generating unit 23 is configured to generate a new alignment identifier at a side or a rear of the first terminal according to the camera distribution situation of the first terminal.
In yet another implementation, the sending unit 24 is further configured to send a cooperative positioning request to the second terminal, where the cooperative positioning request includes an identification of the first terminal.
In yet another implementation, the sending unit 24 is further configured to send, to the first terminal, one or more alignment identifiers, coordinate information of one or more illumination light sources, and a topological graph of an arrangement relationship between the one or more alignment identifiers and the one or more illumination light sources.
For a more detailed description of the above units reference is made to the description of the server in the method of positioning according to the light source described above with reference to fig. 7, 10, 11, 13.
The server 200 may assist the first terminal in recognizing the alignment identifier or generate a new alignment identifier; since the features of each alignment identifier are unique, the vehicle is positioned according to the coordinate information of the one or more illumination light sources determined from the alignment identifier, so that high-precision positioning in an occluded environment can be achieved.
Based on the same concept of the method for positioning according to a light source in the above embodiments, as shown in fig. 16, the embodiment of the present application further provides an internet of vehicles device 300, which may be applied to execute the method flow related to the second terminal in the above method embodiments; that is, the internet of vehicles device 300 may assist the first terminal in achieving high-precision positioning in an occluded environment. The internet of vehicles device 300 includes: the receiving unit 31 and the transmitting unit 32, and may further include an identifying unit 33, a first determining unit 34, a second determining unit 35, and an acquiring unit 36 (connected by a broken line in the figure); illustratively:
A receiving unit 31, configured to receive a cooperative positioning request, where the cooperative positioning request includes an identifier of a first terminal that requests cooperative positioning;
And a transmitting unit 32 configured to transmit a cooperative positioning response, where the cooperative positioning response includes a position of the second terminal and a position and a shape of a taillight of the second terminal.
An identifying unit 33, configured to identify an alignment identifier in the image acquired by the second terminal, where the alignment identifier includes one or more alignment light sources;
a first determining unit 34, configured to determine coordinate information of one or more illumination light sources in the image according to the alignment identifier;
A second determining unit 35, configured to determine a position of the second terminal according to coordinate information of one or more illumination light sources in the image.
In one implementation, the obtaining unit 36 is configured to obtain, from a server, the one or more alignment identifiers, coordinate information of the one or more illumination light sources, and a topological graph of an arrangement relationship between the one or more alignment identifiers and the one or more illumination light sources.
In yet another implementation, the first determining unit 34 is configured to determine coordinate information of one or more illumination light sources around the alignment identifier in the image according to the alignment identifier and the arrangement relation topological graph.
In yet another implementation, when the second terminal requests server-assisted positioning:
A sending unit 32, configured to send an assisted positioning request to the server, where the assisted positioning request includes one or more of the following information: the identification of the second terminal, the historical position of the second terminal, the captured image information, the current running speed of the second terminal and the distribution condition of cameras of the second terminal;
and a receiving unit 31, configured to receive an alignment identifier sent by the server, where the alignment identifier is a new alignment identifier generated by the server according to one or more pieces of information in the assisted positioning request, or an alignment identifier identified according to the captured image information.
For a more detailed description of the above units reference is made to the description of the second terminal in the method of positioning according to the light source described above with reference to fig. 7, 10, 11, 13.
Fig. 17 is a schematic structural diagram of another internet of vehicles device or server according to an embodiment of the present application. For example, the apparatus for implementing the method flow related to the internet of vehicles terminal in the above embodiment, or the apparatus for implementing the method flow related to the internet of vehicles server may be implemented by the apparatus shown in fig. 17.
The apparatus 400 comprises at least one processor 41, a communication bus 42 and a memory 43. The apparatus 400 may also include at least one communication interface 44. The device 400 may be a computing unit or chip in the vehicle, for example a vehicle-mounted telematics box (T-Box) integrated into the vehicle, or a domain controller (DC), or a multi-domain controller (MDC), or an on-board unit (OBU), etc.
The processor 41 may be a general purpose central processing unit (central processing unit, CPU), microprocessor, application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the program of the present invention.
Communication bus 42 may include a pathway to transfer information between the aforementioned components.
The communication interface 44, which may be any transceiver or IP port or bus interface or the like, is used to communicate with internal or external devices or apparatuses or communication networks, such as ethernet, radio access network (radio access network, RAN), wireless local area network (wireless local area networks, WLAN), etc. The communication interface 44 of the internet of vehicles device may be a transceiver for communicating with the external network of the vehicle, or may be a bus interface for communicating with other internal units of the vehicle, such as a controller area network (controller area network, CAN) bus interface, etc.
The memory 43 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, an optical disc storage (including compact disc, laser disc, optical disc, digital versatile disc, Blu-ray disc, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be stand-alone and coupled to the processor via the bus. The memory may also be integrated with the processor.
Wherein the memory 43 is used for storing application program codes for executing the inventive arrangements and is controlled by the processor 41 for execution. The processor 41 is configured to execute the application program code stored in the memory 43, thereby implementing the functions of the internet of vehicles device or the internet of vehicles server in the method of the present application.
In a particular implementation, as one embodiment, processor 41 may include one or more CPUs, such as CPU0 and CPU1 of FIG. 17.
In a specific implementation, the apparatus 400 may include multiple processors, such as the processor 41 and the processor 48 in fig. 17, as one embodiment. Each of these processors may be a single-core (single-CPU) processor or may be a multi-core (multi-CPU) processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
In a specific implementation, as an embodiment, the apparatus 400 may further comprise an output device 45 and an input device 46. The output device 45 communicates with the processor 41 and may display information in a variety of ways. For example, the output device 45 may be a liquid crystal display (LCD), a light emitting diode (LED) display device, a cathode ray tube (CRT) display device, a projector, or the like. The input device 46 communicates with the processor 41 and may accept user input in a variety of ways. For example, the input device 46 may be a mouse, a keyboard, a touch screen device, a sensing device, or the like.
When the apparatus shown in fig. 17 is a chip, the functions/implementation of the communication interface 44 may also be implemented by pins or circuits, etc., and the memory is a storage unit in the chip, such as a register, a cache, etc., and the storage unit may also be a storage unit located outside the chip.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the division of the unit is merely a logic function division, and there may be another division manner when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted or not performed. The coupling or direct coupling or communication connection shown or discussed with each other may be through some interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted through a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave, etc.) means. The computer readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that contains an integration of one or more available media. The usable medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic medium such as a floppy disk, a hard disk, a magnetic tape, or a magnetic disk, an optical medium such as a digital versatile disc (DVD), or a semiconductor medium such as a solid state disk (SSD), or the like.

Claims (9)

1. A method of positioning according to a light source, the method comprising:
A first vehicle-mounted terminal identifies an alignment mark in an image acquired by the first vehicle-mounted terminal, the alignment mark is not identified, the first vehicle-mounted terminal determines whether the first vehicle is blocked by a second vehicle in front, the alignment mark comprises one or more alignment light sources, the periphery of the alignment mark comprises one or more illumination light sources, and the alignment mark has an arrangement relation or a topological relation with the one or more illumination light sources around the alignment mark;
If the first vehicle-mounted terminal determines that the first vehicle is not blocked by the second vehicle, sending an auxiliary positioning request to a server; the assisted location request includes one or more of the following information: the method comprises the steps of identifying a first vehicle-mounted terminal, historical positions of the first vehicle-mounted terminal, captured image information, current running speed of the first vehicle-mounted terminal and camera distribution conditions of the first vehicle-mounted terminal;
The first vehicle-mounted terminal receives the alignment identifier from the server, wherein the alignment identifier is a new alignment identifier generated by the server according to one or more pieces of information in the auxiliary positioning request or an alignment identifier identified according to the captured image information;
the first vehicle-mounted terminal determines coordinate information of one or more illumination light sources in the image according to the alignment identifier sent by the server, wherein the periphery of the alignment identifier comprises the one or more illumination light sources, and the alignment identifier and the one or more illumination light sources around the alignment identifier have an arrangement relationship or a topological relationship;
the first vehicle-mounted terminal determines the position of the first vehicle-mounted terminal according to the coordinate information of one or more illumination light sources in the image;
If the first vehicle-mounted terminal determines that the first vehicle is blocked by the second vehicle, sending a cooperative positioning request to the second vehicle-mounted terminal, so that the second vehicle-mounted terminal recognizes an alignment identifier in an image acquired by the second vehicle-mounted terminal, determining coordinate information of one or more illumination light sources in the image according to the alignment identifier, and determining the position of the second vehicle-mounted terminal according to the coordinate information of one or more illumination light sources in the image, wherein the cooperative positioning request comprises the identifier of the first vehicle-mounted terminal requesting cooperative positioning;
The first vehicle-mounted terminal receiving a cooperative positioning response from the second vehicle-mounted terminal, the cooperative positioning response including a position of the second vehicle-mounted terminal and a position and shape of a rear taillight of the second vehicle-mounted terminal;
the first vehicle terminal determines a position of the first vehicle terminal based on a position of the second vehicle and a position and shape of a rear taillight of the second vehicle.
2. The method of claim 1, wherein the first vehicle terminal identifying an alignment identifier within the image comprises:
The first vehicle-mounted terminal identifies the alignment mark according to the characteristic of the alignment mark;
Wherein the alignment identifier comprises one or more of the following features: the arrangement mode of the alignment light sources, the color of the alignment light sources and the brightness of the alignment light sources.
3. The method of claim 1 or 2, wherein before the first vehicle terminal recognizes the alignment identifier within the image, the method further comprises:
The first vehicle-mounted terminal acquires the one or more alignment identifiers, coordinate information of the one or more illumination light sources and an arrangement relation topological graph of the one or more alignment identifiers and the one or more illumination light sources from a server.
4. A method according to claim 3, wherein the first vehicle terminal determining coordinate information of one or more illumination sources within the image from the alignment identifier comprises:
And the first vehicle-mounted terminal determines coordinate information of one or more illumination light sources around the alignment mark in the image according to the alignment mark and the arrangement relation topological graph.
5. An internet of things device, comprising a processor, a transceiver, and a memory, wherein the memory is configured to store a computer program comprising program instructions that are executed by the processor, the program instructions comprising:
Identifying an alignment identifier within an image acquired by a first vehicle-mounted terminal, the alignment identifier not being identified, the first vehicle-mounted terminal determining whether the first vehicle is obscured by a second vehicle in front, the alignment identifier comprising one or more alignment light sources, the alignment identifier comprising one or more illumination light sources around its periphery, the alignment identifier having an arrangement or topological relationship with the one or more illumination light sources around its periphery;
If the first vehicle-mounted terminal determines that the first vehicle is not blocked by the second vehicle, sending an auxiliary positioning request to a server; the assisted location request includes one or more of the following information: the method comprises the steps of identifying a first vehicle-mounted terminal, historical positions of the first vehicle-mounted terminal, captured image information, current running speed of the first vehicle-mounted terminal and camera distribution conditions of the first vehicle-mounted terminal;
receiving the alignment identifier from the server, wherein the alignment identifier is a new alignment identifier generated by the server according to one or more pieces of information in the auxiliary positioning request or an alignment identifier identified according to the captured image information;
Determining coordinate information of one or more illumination light sources in the image according to the alignment mark sent by the server, wherein the periphery of the alignment mark comprises the one or more illumination light sources, and the alignment mark and the one or more illumination light sources around the alignment mark have an arrangement relation or a topological relation;
Determining a position of the first vehicle-mounted terminal according to coordinate information of one or more illumination light sources in the image;
If the first vehicle-mounted terminal determines that the first vehicle is blocked by the second vehicle, sending a cooperative positioning request to the second vehicle-mounted terminal, so that the second vehicle-mounted terminal recognizes an alignment identifier in an image acquired by the second vehicle-mounted terminal, determining coordinate information of one or more illumination light sources in the image according to the alignment identifier, and determining the position of the second vehicle-mounted terminal according to the coordinate information of one or more illumination light sources in the image, wherein the cooperative positioning request comprises the identifier of the first vehicle-mounted terminal requesting cooperative positioning;
Receiving a cooperative positioning response from the second vehicle-mounted terminal, the cooperative positioning response including a position of the second vehicle-mounted terminal and a position and shape of a rear taillight of the second vehicle-mounted terminal;
The position of the first vehicle terminal is determined based on the position of the second vehicle and the position and shape of the rear taillight of the second vehicle.
6. The internet of vehicles apparatus of claim 5, wherein the program instructions for identifying an alignment identifier within the image comprise:
Identifying the alignment mark according to the characteristic of the alignment mark;
Wherein the alignment identifier comprises one or more of the following features: the arrangement mode of the alignment light sources, the color of the alignment light sources and the brightness of the alignment light sources.
7. The internet of vehicles apparatus of claim 5 or 6, wherein the program instructions further comprise:
and acquiring the one or more alignment identifiers, coordinate information of the one or more illumination light sources and an arrangement relation topological graph of the one or more alignment identifiers and the one or more illumination light sources from a server.
8. The internet of vehicles apparatus of claim 7, wherein the program instructions for determining coordinate information of one or more illumination sources within the image based on the alignment identifier comprise:
And determining coordinate information of one or more illumination light sources around the alignment mark in the image according to the alignment mark and the arrangement relation topological graph.
9. A computer-readable storage medium having instructions stored therein, the instructions being executable on a computer, the instructions comprising:
Identifying an alignment identifier within an image acquired by a first vehicle-mounted terminal, the alignment identifier not being identified, the first vehicle-mounted terminal determining whether the first vehicle is obscured by a second vehicle in front, the alignment identifier comprising one or more alignment light sources, the alignment identifier comprising one or more illumination light sources around its periphery, the alignment identifier having an arrangement or topological relationship with the one or more illumination light sources around its periphery;
If the first vehicle-mounted terminal determines that the first vehicle is not blocked by the second vehicle, sending an auxiliary positioning request to a server; the assisted location request includes one or more of the following information: the method comprises the steps of identifying a first vehicle-mounted terminal, historical positions of the first vehicle-mounted terminal, captured image information, current running speed of the first vehicle-mounted terminal and camera distribution conditions of the first vehicle-mounted terminal;
receiving the alignment identifier from the server, wherein the alignment identifier is a new alignment identifier generated by the server according to one or more pieces of information in the auxiliary positioning request or an alignment identifier identified according to the captured image information;
Determining coordinate information of one or more illumination light sources in the image according to the alignment mark sent by the server, wherein the periphery of the alignment mark comprises one or more illumination light sources, and the alignment mark and the one or more illumination light sources around the alignment mark have an arrangement relation or a topological relation;
Determining a position of the first vehicle-mounted terminal according to coordinate information of one or more illumination light sources in the image;
If the first vehicle-mounted terminal determines that the first vehicle is blocked by the second vehicle, sending a cooperative positioning request to the second vehicle-mounted terminal, so that the second vehicle-mounted terminal recognizes an alignment identifier in an image acquired by the second vehicle-mounted terminal, determining coordinate information of one or more illumination light sources in the image according to the alignment identifier, and determining the position of the second vehicle-mounted terminal according to the coordinate information of one or more illumination light sources in the image, wherein the cooperative positioning request comprises the identifier of the first vehicle-mounted terminal requesting cooperative positioning;
receiving a cooperative positioning response from the second vehicle-mounted terminal, the cooperative positioning response including a position of the second vehicle-mounted terminal and a position and shape of a rear taillight of the second vehicle; and
determining the position of the first vehicle-mounted terminal based on the position of the second vehicle-mounted terminal and the position and shape of the rear taillight of the second vehicle.
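
The arrangement-relation topological graph recited in claims 7 and 8 amounts to a mapping from each alignment identifier to the coordinate information of the illumination light sources arranged around it. The following minimal Python sketch illustrates such a lookup; the class names, identifiers, and coordinate values are hypothetical and do not come from the patent.

from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class IlluminationSource:
    source_id: str
    world_xyz: Tuple[float, float, float]  # coordinate information held by the server

@dataclass
class TopologyGraph:
    # alignment identifier -> illumination light sources arranged around it, in a fixed order
    neighbours: Dict[str, List[IlluminationSource]]

    def sources_around(self, alignment_id: str) -> List[IlluminationSource]:
        """Return the illumination light sources arranged around the given alignment identifier."""
        return self.neighbours.get(alignment_id, [])

# Data the vehicle-mounted terminal would have acquired from the server in advance.
graph = TopologyGraph(neighbours={
    "A-17": [
        IlluminationSource("L-101", (120.0, 45.5, 6.0)),
        IlluminationSource("L-102", (125.0, 45.5, 6.0)),
        IlluminationSource("L-103", (130.0, 45.5, 6.0)),
    ],
})

for src in graph.sources_around("A-17"):
    print(src.source_id, src.world_xyz)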
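
Claim 9 determines the position of the first vehicle-mounted terminal from the coordinate information of the illumination light sources in the image. The patent does not prescribe a particular solver; one common approach, shown below only as an illustrative sketch, is a perspective-n-point (PnP) solve that combines the known world coordinates of the light sources with their detected pixel coordinates. The camera intrinsics and all numeric values here are hypothetical.

import numpy as np
import cv2

# World coordinates (metres) of four illumination light sources, e.g. obtained from the server.
object_points = np.array([
    [120.0, 45.5, 6.0],
    [125.0, 45.5, 6.0],
    [130.0, 45.5, 6.0],
    [125.0, 40.5, 6.0],
], dtype=np.float64)

# Pixel coordinates of the same light sources as detected in the captured image (hypothetical).
image_points = np.array([
    [512.0, 180.0],
    [760.0, 176.0],
    [1010.0, 174.0],
    [762.0, 330.0],
], dtype=np.float64)

# Intrinsics of the vehicle camera, assumed known from calibration.
camera_matrix = np.array([
    [1000.0,    0.0, 960.0],
    [   0.0, 1000.0, 540.0],
    [   0.0,    0.0,   1.0],
])
dist_coeffs = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
if ok:
    rotation, _ = cv2.Rodrigues(rvec)
    # Camera (terminal) position expressed in the world frame of the light sources.
    terminal_position = (-rotation.T @ tvec).ravel()
    print("estimated terminal position:", terminal_position)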
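
When the view of the alignment identifier is blocked, claim 9 falls back on a cooperative positioning exchange with the second vehicle-mounted terminal. The sketch below mirrors the message fields named in the claim; ranging the second vehicle from the apparent size of its rear taillight with a pinhole-camera model, and the heading field, are assumptions of this sketch rather than methods stated in the patent, and all numbers are hypothetical.

import math
from dataclasses import dataclass
from typing import Tuple

@dataclass
class CooperativePositioningRequest:
    requester_id: str                 # identifier of the first vehicle-mounted terminal

@dataclass
class CooperativePositioningResponse:
    position: Tuple[float, float]     # position (x, y) of the second vehicle-mounted terminal
    taillight_width_m: float          # physical width of the rear taillight (shape information)
    heading_rad: float                # heading of the second vehicle (assumed to be shared)

def estimate_first_vehicle_position(
    response: CooperativePositioningResponse,
    taillight_pixel_width: float,     # apparent taillight width in the first vehicle's image
    focal_length_px: float,
) -> Tuple[float, float]:
    """Place the first vehicle behind the second at the range implied by the taillight size."""
    gap = focal_length_px * response.taillight_width_m / taillight_pixel_width
    x2, y2 = response.position
    return (x2 - gap * math.cos(response.heading_rad),
            y2 - gap * math.sin(response.heading_rad))

# Example exchange with hypothetical values; the request carries only the requester's identifier.
request = CooperativePositioningRequest(requester_id="terminal-001")
response = CooperativePositioningResponse(position=(300.0, 120.0), taillight_width_m=0.35, heading_rad=0.0)
print(estimate_first_vehicle_position(response, taillight_pixel_width=70.0, focal_length_px=1000.0))
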
CN201910712800.9A 2019-08-02 2019-08-02 Method and device for positioning according to light source Active CN112305499B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910712800.9A CN112305499B (en) 2019-08-02 2019-08-02 Method and device for positioning according to light source

Publications (2)

Publication Number Publication Date
CN112305499A (en) 2021-02-02
CN112305499B (en) 2024-06-21

Family

ID=74485524

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910712800.9A Active CN112305499B (en) 2019-08-02 2019-08-02 Method and device for positioning according to light source

Country Status (1)

Country Link
CN (1) CN112305499B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI807548B (en) * 2021-10-18 2023-07-01 國立陽明交通大學 Vehicular positioning method and system of utilizing repeated feature object, computer-readable medium
CN117269887B (en) * 2023-11-21 2024-05-14 荣耀终端有限公司 Positioning method, electronic equipment and readable storage medium

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013016439A1 (en) * 2011-07-26 2013-01-31 ByteLight, Inc. Self identifying modulated light source
US10378897B2 (en) * 2013-06-21 2019-08-13 Qualcomm Incorporated Determination of positioning information of a mobile device using modulated light signals
WO2016126322A1 (en) * 2015-02-06 2016-08-11 Delphi Technologies, Inc. Autonomous vehicle with unobtrusive sensors
CN105467356B (en) * 2015-11-13 2018-01-19 暨南大学 A kind of high-precision single LED light source indoor positioning device, system and method
CN105592420B (en) * 2015-12-17 2019-11-22 北京百度网讯科技有限公司 Environmental characteristic library generates and indoor orientation method and device based on environmental characteristic library
CN107093319A (en) * 2016-02-17 2017-08-25 华为技术有限公司 A kind of Accident Handling Method and corresponding intrument
CN205844514U (en) * 2016-04-28 2016-12-28 百色学院 A kind of preferably two light source chamber interior locating device and systems based on visible light communication
CN106707237B (en) * 2016-12-13 2020-11-10 华南师范大学 Indoor positioning method and system based on visible light
CN107504960B (en) * 2017-07-28 2019-10-01 西安电子科技大学 Vehicle positioning method and system
CN107703485A (en) * 2017-09-16 2018-02-16 罗智荣 A kind of indoor visible light alignment system
CN107835050B (en) * 2017-11-01 2019-06-18 中国科学院计算技术研究所 A kind of localization method and system based on visible light communication
CN107767661B (en) * 2017-11-23 2020-04-17 李党 Real-time tracking system for vehicle
CN109711327A (en) * 2018-12-25 2019-05-03 深圳市麦谷科技有限公司 A kind of vehicle assisted location method, system, computer equipment and storage medium
CN109785617B (en) * 2019-01-03 2021-01-26 中国联合网络通信集团有限公司 Method for processing traffic control information

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103038661A (en) * 2010-06-10 2013-04-10 高通股份有限公司 Acquisition of navigation assistance information for a mobile station
CN107076557A (en) * 2016-06-07 2017-08-18 深圳市大疆创新科技有限公司 Mobile robot recognition positioning method, device, system and mobile robot
CN106205178A (en) * 2016-06-30 2016-12-07 联想(北京)有限公司 A kind of vehicle positioning method and device
CN109164412A (en) * 2018-07-06 2019-01-08 北京矿冶科技集团有限公司 A kind of assisted location method based on beacon identification

Also Published As

Publication number Publication date
CN112305499A (en) 2021-02-02

Similar Documents

Publication Publication Date Title
CN109739236B (en) Vehicle information processing method and device, computer readable medium and electronic equipment
EP3500944B1 (en) Adas horizon and vision supplemental v2x
KR102221321B1 (en) Method for providing information about a anticipated driving intention of a vehicle
JP6424761B2 (en) Driving support system and center
US20200104289A1 (en) Sharing classified objects perceived by autonomous vehicles
US11335188B2 (en) Method for automatically producing and updating a data set for an autonomous vehicle
US11205342B2 (en) Traffic information processing device
EP3644294A1 (en) Vehicle information storage method, vehicle travel control method, and vehicle information storage device
US20180224284A1 (en) Distributed autonomous mapping
EP3745376B1 (en) Method and system for determining driving assisting data
US20210310823A1 (en) Method for updating a map of the surrounding area, device for executing method steps of said method on the vehicle, vehicle, device for executing method steps of the method on a central computer, and computer-readable storage medium
JP6841263B2 (en) Travel plan generator, travel plan generation method, and control program
KR101439019B1 (en) Car control apparatus and its car control apparatus and autonomic driving method
US20210304607A1 (en) Collaborative perception for autonomous vehicles
CN111353453B (en) Obstacle detection method and device for vehicle
CN110696826B (en) Method and device for controlling a vehicle
CN113167592A (en) Information processing apparatus, information processing method, and information processing program
CN112305499B (en) Method and device for positioning according to light source
CN109029418A (en) A method of vehicle is positioned in closed area
Park et al. Glossary of connected and automated vehicle terms
CN112689241B (en) Vehicle positioning calibration method and device
JP7301103B2 (en) Methods, apparatus, devices, media and systems for operational control
JP5720951B2 (en) Traffic information distribution system, traffic information system, traffic information distribution program, and traffic information distribution method
US11724718B2 (en) Auction-based cooperative perception for autonomous and semi-autonomous driving systems
EP3640679B1 (en) A method for assigning ego vehicle to a lane

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220208

Address after: 550025 Huawei cloud data center, jiaoxinggong Road, Qianzhong Avenue, Gui'an New District, Guiyang City, Guizhou Province

Applicant after: Huawei Cloud Computing Technologies Co.,Ltd.

Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen

Applicant before: HUAWEI TECHNOLOGIES Co.,Ltd.

GR01 Patent grant