CN116630577A - AR information display method, vehicle and device under real-time moving live-action

Info

Publication number: CN116630577A
Application number: CN202210130293.XA
Authority: CN (China)
Prior art keywords: information, vehicle, display, road, image
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 周艳, 张浩, 郭泽金, 王斌, 苏敏
Current Assignee: Huawei Technologies Co Ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd
Priority to CN202210130293.XA
Publication of CN116630577A

Classifications

    • G06T19/003 Navigation within 3D models or images
    • G06T19/006 Mixed reality
    • B60K35/00 Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • B60K35/28 Output arrangements characterised by the type of the output information, e.g. video entertainment or vehicle dynamics information, or by the purpose of the output information, e.g. for attracting the attention of the driver
    • B60K2360/166 Navigation
    • B60K2360/177 Augmented reality
    • B60K2360/334 Projection means
    • G02B27/0101 Head-up displays characterised by optical features
    • G02B27/0179 Display position adjusting means not related to the information to be displayed
    • G02B2027/0138 Head-up displays comprising image capture systems, e.g. camera
    • G02B2027/014 Head-up displays comprising information/image processing systems
    • G02B2027/0141 Head-up displays characterised by the informative content of the display
    • G02B2027/0181 Adaptation to the pilot/driver
    • G02B2027/0183 Adaptation to parameters characterising the motion of the vehicle
    • H04N13/327 Calibration of image reproducers
    • H04N13/332 Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/366 Image reproducers using viewer tracking

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application relates to the technical field of augmented reality, and discloses an AR information display method, a vehicle and a device under a real-time moving live-action. The method includes: acquiring a first road live-action image at a first moment, when the electronic device is at a first position during movement; determining a predicted display position of AR information corresponding to a first target object based on the first road live-action image; determining calibration information for the predicted display position based on the movement speed of the electronic device as it moves from the first position to a second position; obtaining a calibrated display position based on the calibration information; and displaying the AR information in the front view based on the calibrated display position when the electronic device is at the second position. In this way, the electronic device displays the AR information in the front view at the calibrated display position, which improves the accuracy of the display position of the AR information to a certain extent, enhances the realism of the virtual-real fit between the AR information and the front view, and improves the user experience.

Description

AR information display method, vehicle and device under real-time moving live-action
Technical Field
The application relates to the technical field of augmented reality, in particular to an AR information display method, a vehicle and a device under real-time moving live-action.
Background
An augmented reality head-up display (Augmented Reality Head up display, AR HUD) is a device that displays an image containing AR information in the forward view, where the AR information includes driving assistance information and navigation guidance information. With an AR HUD, a driver can observe the AR information in the front view of the vehicle without lowering the head to look at a mobile phone or the in-vehicle display screen, which avoids line-of-sight switching, shortens reaction time in critical situations, and improves driving safety.
AR information is generally displayed as follows: the vehicle first collects a road live-action image, determines the position at which the AR information needs to be displayed in the front view based on the road live-action image, and then displays the AR information at the corresponding position in the front view.
However, when the vehicle is in a driving state, it takes a certain time for the vehicle to determine the display position of the AR information in the front view based on the collected road live-action image, and the vehicle keeps moving during that time. As a result, by the time the vehicle displays the AR information, the front view has already changed, so the display position of the AR information is inaccurate, the realism of the virtual-real fit between the AR information and the front view is low, and the user experience is poor.
Disclosure of Invention
The embodiment of the application provides an AR information display method, a vehicle and a device under real-time moving live-action.
In a first aspect, an embodiment of the present application provides a method for displaying AR information under a real-time moving live-action, applied to an electronic device, where the method includes: acquiring a first road live-action image at a first moment, when the electronic device is at a first position during movement; determining a predicted display position of AR information corresponding to a first target object based on the first road live-action image; determining calibration information for the predicted display position based on the movement speed of the electronic device as it moves from the first position to a second position; acquiring a calibrated display position based on the calibration information; and displaying the AR information in a front view based on the calibrated display position when the electronic device is at the second position.
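As a rough illustration of this flow (not the patented implementation itself), the following Python sketch shifts a predicted display position by the distance travelled during the processing delay; the DisplayPosition fields and the vehicle-relative coordinate frame are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class DisplayPosition:
    lateral_m: float   # metres to the right of the vehicle (assumed frame)
    forward_m: float   # metres ahead of the vehicle

def calibrated_display_position(predicted: DisplayPosition,
                                speed_mps: float, delay_s: float) -> DisplayPosition:
    """Move the predicted position closer by the distance covered while the
    prediction was being computed, so it matches the view at the second position."""
    travelled = speed_mps * delay_s                      # calibration information
    return DisplayPosition(predicted.lateral_m, predicted.forward_m - travelled)

# A turn icon predicted 40 m ahead at the first position; at 15 m/s with a 0.2 s
# processing delay, the vehicle is 3 m closer by the time the icon is drawn.
print(calibrated_display_position(DisplayPosition(2.0, 40.0), 15.0, 0.2))
```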
It is understood that, according to an embodiment of the present application, the electronic device may display the AR information in the front view based on the calibrated display position. Therefore, the accuracy of the display position of the AR information can be improved to a certain extent, the sense of reality of virtual-real fit of the AR information and the front visual field is improved, and the user experience is improved.
In a possible implementation of the first aspect, before the AR information is displayed in the front view based on the calibrated display position when the electronic device is at the second position, the method further includes: converting the calibrated display position from an image coordinate system into a human eye coordinate system, and then into an augmented reality head-up display coordinate system.
In a possible implementation of the first aspect, the determining, based on the first road live-action image, a predicted display position of AR information corresponding to a first target object includes: determining spatial feature information based on the first road live-action image; and determining the predicted display position of the AR information corresponding to the first target object based on the spatial feature information.
In a possible implementation of the first aspect, the determining spatial feature information based on the first road live-action image includes: determining the spatial feature information based on a simultaneous localization and mapping technology and the first road live-action image.
In a possible implementation of the first aspect, the determining spatial feature information based on the first road live-action image includes: if a curve or an up/down slope exists in the first road live-action image, converting the image of the first road live-action image captured from the camera angle into an image in a world coordinate system.
In a possible implementation of the first aspect, the AR information includes driving assistance information and/or navigation guidance information.
In a second aspect, an embodiment of the present application provides a vehicle including a head-up display and an AR information display system; the AR information display system is used for acquiring a first road live-action image of the vehicle at a first position at a first moment in the moving process; the AR information display system is used for determining the predicted display position of AR information corresponding to a first target object based on the first road live-action image; the AR information display system is used for determining calibration information of the predicted display position based on the moving speed of the vehicle in the process of moving the vehicle from a first position to a second position; the AR information display system is used for acquiring a calibrated display position based on the calibration information; the AR information display system is used for controlling the head-up display to display the AR information based on the calibrated display position when the vehicle is at the second position.
It is understood that the AR information display system may be, but is not limited to, a smart car system, a controller in a head-up display system, and the like.
In a possible implementation of the second aspect, before the AR information is displayed in the front view based on the calibrated display position, the AR information display system is configured to convert the calibrated display position from an image coordinate system into a human eye coordinate system, and then into an augmented reality head-up display coordinate system.
In a possible implementation of the second aspect, the AR information display system being configured to determine, based on the first road live-action image, a predicted display position of AR information corresponding to a first target object includes: the AR information display system is configured to determine spatial feature information based on the first road live-action image; and the AR information display system is configured to determine the predicted display position of the AR information corresponding to the first target object based on the spatial feature information.
In a possible implementation of the second aspect, the AR information display system being configured to determine spatial feature information based on the first road live-action image includes: the AR information display system is configured to determine the spatial feature information based on a simultaneous localization and mapping technology and the first road live-action image.
In a possible implementation of the second aspect, the AR information display system being configured to determine spatial feature information based on the first road live-action image includes: the AR information display system is configured to convert the image of the first road live-action image captured from the camera angle into an image in a world coordinate system in the case that a curve or an up/down slope exists in the first road live-action image.
In a possible implementation of the second aspect, the AR information includes driving assistance information and/or navigation guidance information.
In a third aspect, an embodiment of the present application provides an apparatus, including: one or more memories storing instructions; a processor coupled to the one or more memories, the instructions, when executed by the processor, cause the apparatus to perform the real-time mobile live-action AR information display method of any of the first aspects.
In a possible implementation of the third aspect, the apparatus is a vehicle, a mobile phone, a watch, or AR glasses.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where instructions are stored, where the instructions, when executed on an electronic device, cause the electronic device to perform the real-time mobile live-action AR information display method of any one of the first aspects.
In a fifth aspect, an embodiment of the present application provides a computer program product comprising instructions, which when run on a computer, cause the computer to perform the real-time mobile live-action AR information display method according to any one of the first aspects.
In a sixth aspect, an embodiment of the present application provides a chip, where the chip is coupled to a memory, and is configured to read and execute program instructions stored in the memory, so as to implement the real-time mobile live-action AR information display method according to any one of the first aspect.
Drawings
Fig. 1 is a schematic illustration of a virtual image principle provided in an embodiment of the present application;
fig. 2A shows an application scenario of an AR information display method under a real-time mobile live-action;
fig. 2B shows a schematic view of the vehicle 1 during driving, using a head-up display technique to display a steering icon A001 in the view in front of the windshield;
FIG. 3 is a flow chart showing an AR information display method under real-time moving reality;
FIG. 4A shows various road grade angles to the horizontal of the vehicle 1;
fig. 4B shows vanishing lines P', G' or Q' that may exist in the road live-action image A01;
fig. 5A to 5C are schematic diagrams showing the principle of determining the distance between a target object in a road live-action image of a previous frame acquired by the vehicle 1 and a camera;
fig. 6A shows a positional relationship of a camera with respect to a photographed object;
fig. 6B shows the conversion of an image L2 under an image coordinate system captured by a camera angle into a top view L1 under a world coordinate system;
FIG. 7 is a schematic diagram showing the distribution of element position information in a road live-action image;
fig. 8 is a schematic diagram showing a process of determining the positional relationship between the vehicle 1 and a lane line where the vehicle 1 is located;
FIG. 9 shows a general schematic diagram of the conversion of an image taken by a camera into an image at the angle of the human eye;
fig. 10 shows a schematic diagram of a possible functional framework of a vehicle 1.
Detailed Description
The embodiment of the application comprises, but is not limited to, an AR information display method, medium and electronic equipment under a real-time moving live-action.
Some concepts related to embodiments of the present application are described below.
(1) Augmented reality head-up display (Augmented Reality Head up display, AR HUD)
An AR HUD is generally provided in a vehicle, and is a display device that projects an image into a front view of a user (e.g., a driver) in the vehicle based on a virtual image principle.
In the embodiment of the present application, the AR HUD is a device that displays an image including AR information on a real road surface and/or an object on a real road surface in a front view. Wherein the AR information includes driving assistance information and/or navigation guidance information. The driving assistance information may include information of steering indication, lane departure, collision warning, following distance, pedestrian prompt, obstacle, navigation route, and the like. The navigation guidance information may include information such as vehicle speed, navigation information, traffic sign, driving information, region of interest (Point of Interest, POI), and the like. The region of interest may be a parking lot, restaurant, mall, theater, gas station, etc.
For example, the AR HUD may display driving assistance information at a preset distance (e.g., 10 meters) in front of the vehicle, and may display navigation guidance information at another preset distance (e.g., 2.5 meters) in front of the vehicle.
(2) Principle of virtual image
Virtual images are optical phenomena that can be viewed directly by the eye but cannot be received by a light curtain.
In the embodiment of the present application, the imaging principle of the AR HUD in the vehicle may be referred to in fig. 1, which is an imaging schematic diagram of the virtual image principle provided in the embodiment of the present application. As shown in fig. 1, the augmented reality head-up display E generates a source image and emits a light beam L with a certain divergence angle α1 based on the source image, where the light beam L is indicated by light rays L1 and L2. After being reflected by the mirrors M1 and M2 and the front windshield of the automobile, the light beam L enters the human eye at a divergence angle α2. Based on the experience that light propagates along straight lines, the brain traces the rays backward and treats the intersection point of the backward extensions as an object point, that is, a virtual image point. The content of the virtual image point may be the aforementioned AR information.
As described above, when the vehicle is in a driving state, it takes a certain time for the vehicle to determine the position at which the AR information is displayed in the front view based on the collected road live-action image, and the vehicle moves during that time. Therefore, when the vehicle displays the foregoing AR information, the front view has already changed, so that the display position of the AR information is inaccurate, the realism of the virtual-real fit between the AR information and the front view is low, and the user experience is poor.
For example, fig. 2A shows an application scenario schematic of an AR information display method under a real-time mobile live-action. As shown in fig. 2A, the vehicle 1 travels in the current environment, the vehicle 1 is at a position P1 at a time t0, at a position P2 at a time t1, and the vehicle 1 has moved from the position P1 to the current position P2 from the time t0 to the time t1, and the vehicle 1 has moved by a distance d1.
Fig. 2B shows a schematic diagram of the vehicle 1 displaying a steering icon A001 in the view in front of the windshield using a head-up display technique during driving.
As shown in fig. 2B, the vehicle 1 is in a driving state, and the navigation application scenario requires a steering icon A001 to be displayed at the turn where the traffic light is located. The position of the turn is determined by the vehicle 1 at least according to the position of the target object, namely the traffic light, and the steering icon A001 is used to prompt the vehicle 1 to change its driving direction at the turn where the traffic light is located.
At time t0, the vehicle 1 is at the position P1; the vehicle 1 may acquire a road live-action image and then obtain, based on the acquired road live-action image, a position P1' at which the steering icon A001 is to be displayed. By the time the steering icon A001 is displayed at time t1, the vehicle 1 has moved from the position P1 to the current position P2, i.e., it has moved by the distance d1. Thus, the front view has changed, and the distance between the vehicle 1 and the turn where the traffic light is located has shortened. However, the steering icon A001 is still displayed according to the distance between the turn where the traffic light is located and the former position of the vehicle 1, so the steering icon A001 is displayed at the position P1'. The display position of the steering icon A001 therefore does not correspond to the position where it should actually be displayed, the realism of the virtual-real fit between the steering icon A001 and the front view is low, and the user experience is reduced.
In order to solve the technical problems described in the background, the embodiment of the application provides an AR information display method under a real-time moving live-action. The method is as follows: the vehicle acquires a road live-action image at a first position at a first time during travelling, and determines a predicted display position of AR information corresponding to a target object based on the road live-action image; while the predicted display position of the AR information corresponding to the target object (or target point) is being determined based on the road live-action image, the vehicle 1 travels from the first position to a second position at a second time; at the second position, the vehicle calibrates the predicted display position of the AR information based on the first position and the second position, and displays the AR information in the front view of the vehicle 1 based on the calibrated predicted display position.
Therefore, the accuracy of the display position of the AR information is improved, the sense of reality of virtual-real fit of the AR information and the front visual field is improved, and the user experience is improved.
For example, as shown in fig. 2B, in the navigation application scenario, if the vehicle 1 is in a driving state, it acquires a road live-action image at the position P1 at time t0 and, based on the road live-action image, determines the predicted display position of the AR information (i.e., the steering icon A001) corresponding to the turn where the target object, the traffic light, is located to be the position P1'. While the predicted display position of the steering icon A001 is being determined based on the road live-action image, the vehicle 1 travels from the position P1 to the position P2. At time t1, the vehicle 1 calibrates the predicted display position of the AR information (i.e., the steering icon A001) based on the position P1 and the position P2, obtains the calibrated position P2', and displays the steering icon A001 in the front view of the vehicle 1 based on the calibrated predicted display position P2'.
Based on the above, the vehicle 1 obtains a more accurate display position (the position P2') for the steering icon A001, instead of the position P1', which would partially cover the support column of the traffic light. Therefore, the accuracy of the display position of the steering icon A001 can be improved, the realism of the virtual-real fit between the steering icon A001 and the front view is improved, and the user experience is improved.
The AR information display method under the real-time moving live-action provided by the embodiment of the application can be suitable for the AR live-action navigation process, such as turning, changing lanes and the like. But is not limited thereto.
The AR information display method under the real-time moving live-action provided by the embodiment of the application can be applied to AR live-action navigation of a vehicle, AR live-action navigation of a mobile phone, AR live-action navigation of a smart watch, AR live-action navigation of AR glasses and the like, but is not limited to the method.
It is understood that the AR information may be displayed in text, image, three-dimensional (3D) model, etc. The steering icon A001 can be displayed in a 3D mode, so that better stereoscopic impression and sense of reality are presented, the visual effect of real AR information display is improved, and better navigation guidance is provided for a user.
Fig. 3 is a schematic flow chart of an AR information display method under a real-time mobile live-action according to an embodiment of the present application. As shown in fig. 3, the execution subject of the flow may be the vehicle 1, and the flow includes the steps of:
301: spatial feature information in a running environment of the vehicle 1 at a first position at a first time is acquired.
It will be appreciated that the spatial feature information may include road feature information and/or road space information. The road feature information may include objects in the driving environment of the vehicle 1, and the objects may include buildings, vehicles ahead of the vehicle 1, traffic lights, lane lines, utility poles, road edges, intersections, and the like. The road space information may include position information of the vehicle 1 in the actual environment, and the position information may include the vehicle posture of the vehicle 1, which may be understood to mean, for example, the up-down pitch angle, the left-right deflection angle, or the like of the vehicle with respect to the road. The up-down pitch angle refers to the angle between the vehicle and the line parallel to the road ahead. The road space information may also include position information of objects in the actual environment in which the vehicle 1 is located.
In some embodiments, the vehicle 1 may acquire a road live-action image of the driving environment of the vehicle 1 and extract the spatial feature information from the acquired road live-action image. Alternatively, the road space information may be obtained by means of simultaneous localization and mapping (Simultaneous Localization And Mapping, SLAM) technology, where the goal of SLAM is to construct a map of the surrounding environment in real time from the road live-action images obtained by the vehicle 1. It will be appreciated that, on this basis, the vehicle 1 may further infer the position information of objects in the driving environment based on geometric perspective and the surrounding environment map. In other embodiments, if the vehicle 1 has a high-precision map and a laser radar, the vehicle 1 may obtain the absolute position (or absolute geographic position) of the vehicle 1 and of each object in the driving environment through the high-precision map and the laser radar. The absolute geographic position, namely the longitude and latitude position, takes the whole earth as the reference system and longitude and latitude as the measurement standard; each place on the earth has a unique longitude and latitude value. For example, the vehicle 1 may obtain the absolute positions of the vehicle 1 and of the AR information marking location (e.g., the turn where the traffic light is located) through the high-precision map and the laser radar.
In addition to acquiring the road space information by the SLAM method as described above, in other embodiments the vehicle 1 may correct the road space information acquired by the SLAM method. For example, the vehicle 1 may determine the angle between the horizontal line of the vehicle 1 and the line parallel to the road ahead according to the angle between the horizontal line and the line connecting the user's eye position in the vehicle 1 with the vanishing line of the road in the road live-action image. Here, "vanishing line" refers to the visual intersection of parallel lines.
For example, fig. 4A shows different road grade angles relative to the horizontal line at which the vehicle 1 is located. As shown in fig. 4A, Oc represents the position of the user's eyes in the vehicle 1, and the angles between the horizontal line OcH and the straight lines connecting the eye position Oc with the vanishing lines H', P', G' and Q' are 0, α, φ and β, respectively. Fig. 4B shows the vanishing lines P', G', Q' and H' that may exist in the road live-action image A01.
The angle between the horizontal line on which the vehicle 1 is located and the road differs depending on where the vanishing line is located in the road live-action image A01. If the vanishing line in the road live-action image A01 is P', then, according to the distance from the eye Oc to the plane of the road live-action image A01 displayed on the windshield of the vehicle 1, the distance from the vanishing line P' to the edge of the road live-action image A01, and the tangent and arctangent trigonometric formulas, the angle α between the line OcP', connecting the user's eye position Oc in the vehicle 1 and the vanishing line P' in the road live-action image A01, and the horizontal line OcH is obtained.
Since the angle between the line OcP' and the horizontal line OcH is α, the angle between the straight line from the eye position Oc to the vanishing point P and the plane of the vehicle 1 parallel to the horizontal line OcH is also α, and the angle between the plane of the vehicle 1 parallel to the horizontal line OcH and the road ahead of the vehicle 1 is likewise α. The angle between the plane of the vehicle 1 parallel to the horizontal line OcH and the corresponding road is thereby obtained as α.
Similarly, if the vanishing line in the road live-action image A01 is G', then, according to the distance from the eye Oc to the plane of the road live-action image A01, the distance from the vanishing line G' to the edge of the road live-action image A01, and the tangent and arctangent trigonometric formulas, the angle between the plane of the vehicle 1 parallel to the horizontal line OcH and the road 2 is obtained as φ.
Similarly, if the vanishing line in the road live-action image A01 is Q', then, according to the distance from the eye Oc to the plane of the road live-action image A01, the distance from the vanishing line Q' to the edge of the road live-action image A01, and the tangent and arctangent trigonometric formulas, the angle between the plane of the vehicle 1 parallel to the horizontal line OcH and the road 3 is obtained as β.
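To make the tangent/arctangent step above concrete, here is a small numerical sketch (the distances used are made-up illustration values, not figures from the patent):

```python
import math

def vanishing_line_pitch(eye_to_image_plane_m: float, vanishing_line_offset_m: float) -> float:
    """Angle between the line from the eye Oc to the vanishing line and the horizontal
    line OcH, from the right-triangle relation tan(angle) = offset / distance."""
    return math.atan2(vanishing_line_offset_m, eye_to_image_plane_m)

# Example: the image plane on the windshield is 0.8 m from the eye, and the vanishing
# line P' sits 0.07 m below the point straight ahead, giving a pitch angle of ~5 degrees.
alpha = vanishing_line_pitch(0.8, 0.07)
print(round(math.degrees(alpha), 1))
```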
During the running of the vehicle 1, the vehicle 1 may collect a previous-frame road live-action image (for example, the road live-action image at the first position) and a current-frame road live-action image (for example, the road live-action image at the second position) through the camera, and the vehicle 1 travels a preset distance between the moment the camera collects the previous frame and the moment it collects the current frame. The vehicle 1 may then determine the distance between the camera and a target object in the road live-action images based on the preset distance travelled by the vehicle 1 and on the movement variation and imaging variation (viewing angle, geometric perspective, distortion, etc.) of the target object between the previous-frame and current-frame road live-action images.
For example, fig. 5A to 5C show schematic diagrams of determining the distance between the camera and a target object in the previous-frame road live-action image acquired by the vehicle 1. As shown in fig. 5A, the vehicle 1 acquires the current-frame road live-action image and the previous-frame road live-action image through the camera, and travels a preset distance d between the two acquisitions. Since the imaging change is an equal-proportion scaling and no distortion occurs, the vehicle 1 can determine the distance D2 between the vehicle 1 and the target object based on the preset distance d travelled by the vehicle 1 and the change in the imaged size of the target object between the two frames (for example, as shown in fig. 5B, the ratio K1 of the area of the target object to the area of the imaging picture in the previous frame and the corresponding ratio K2 in the current frame), through the following formula:
d = D2 × (K2 − K1) / K1
Wherein, as shown in fig. 5C, D2 represents the distance between the vehicle 1 and the target object at the moment when the camera collects the previous-frame road live-action image (the frame preceding the current frame).
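Rearranging the relation d = D2 × (K2 − K1) / K1 gives the camera-to-target distance D2 at the previous frame; the sketch below assumes K1 and K2 are the target's area ratios in the previous and current frames, as read off from detection results:

```python
def distance_to_target(d_travelled_m: float, k_prev: float, k_curr: float) -> float:
    """Solve d = D2 * (K2 - K1) / K1 for D2, the camera-to-target distance at the
    moment the previous frame was captured."""
    if k_curr <= k_prev:
        raise ValueError("the target should occupy a larger image fraction in the current frame")
    return d_travelled_m * k_prev / (k_curr - k_prev)

# Example: the vehicle travelled 5 m between the two frames while the target's share
# of the imaging picture grew from 2% to 2.5%, giving a distance of 20 m.
print(distance_to_target(5.0, 0.02, 0.025))
```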
It will be appreciated that, since the camera is oriented obliquely downward rather than fully downward, the resulting road live-action image may have keystone (trapezoidal) distortion, i.e., nearby objects appear large in the road live-action image and distant objects appear small. The trapezoidal distortion can cause large errors in subsequent navigation-line recognition and deflection-angle calculation, so that accurate positioning and travelling along the line cannot be achieved. Therefore, the vehicle 1 can directly derive the mapping relation between the image coordinate system and the real world coordinate system according to the imaging principle of the camera's optical lens and other geometric relations, so as to convert the road live-action image in the image coordinate system, captured from the camera's view angle, into a top-view road live-action image in the real world coordinate system.
In some embodiments, if the vehicle 1 determines that the road in the road live-action image has a curve or an ascending or descending slope, the curve needs to be calculated according to the scale of the curve line in the geometric perspective space, and the curve line is fitted to the curvature of the curved road ahead, or to the ascending or descending slope, at a top-down view angle in the real world coordinate system.
Specifically, the vehicle 1 derives the correspondence between the image coordinate system and the real world coordinate system according to the imaging principle of the camera's optical lens. On curved roads and on ascending or descending slopes, the actual horizontal and vertical distances between an object and the bottom of the camera can be deduced as long as the pixel coordinates of the object in the image coordinate system are known.
For example, fig. 6A shows a positional relationship of a camera with respect to a photographed object. Fig. 6B shows the conversion of an image L2 in an image coordinate system captured by a camera angle into a top view L1 in a world coordinate system.
The vehicle 1 can calculate, by the following formula, the position in the top view L1 (the "trapezoidal" image A) in the world coordinate system corresponding to each pixel of the image L2 (the "square" image B) in the image coordinate system captured at the camera angle, that is, convert the "perspective view" into the "top view".
Y1 = H × tan(α + Δθ);
Here, (X0, Y0) represents the coordinates of a pixel point in the image B (image coordinate system) captured at the camera angle, and (X1, Y1) represents the coordinates of the corresponding point in the top view A converted from the image B into the world coordinate system. α represents the pitch angle, in the side view, between the line from the camera to the bottom edge of the image and the vertical direction of the camera; θ represents the vertical field angle of the camera in the side view; H represents the height of the camera above the ground; the actual distance between the bottom edge of the image and the camera is Dmin, and the actual distance between the top edge of the image and the camera is Dmax.
β represents the horizontal field angle of the camera in the top view, width represents the image width, height represents the image height, and D1 represents the coordinate difference between the camera and the intersection point of the horizontal plane line; X1 is the horizontal distance from the camera and Y1 is the longitudinal distance from the camera.
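A sketch of the row-wise ground-distance computation implied by Y1 = H × tan(α + Δθ); how Δθ is interpolated across image rows from the vertical field angle θ is an assumption made here for illustration, not a detail taken from the patent:

```python
import math

def row_ground_distance(row: int, image_height: int, cam_height_m: float,
                        alpha_rad: float, vertical_fov_rad: float) -> float:
    """Forward ground distance Y1 for an image row, assuming the bottom row corresponds
    to the angle alpha and the angle grows linearly towards the top of the image."""
    delta_theta = vertical_fov_rad * (image_height - 1 - row) / (image_height - 1)
    return cam_height_m * math.tan(alpha_rad + delta_theta)

# Example: a 720-row image, camera 1.3 m above the ground, alpha = 30 degrees,
# vertical field angle 40 degrees; the bottom row gives roughly Dmin, the top row Dmax.
print(round(row_ground_distance(719, 720, 1.3, math.radians(30), math.radians(40)), 2))
print(round(row_ground_distance(0, 720, 1.3, math.radians(30), math.radians(40)), 2))
```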
Further, after obtaining the road live-action image in the world coordinate system through the SLAM mode, the vehicle 1 may also calibrate, through other means, the road space information obtained from the road live-action image in the SLAM mode. For example, the vehicle 1 may calculate the position information of each element from the road live-action image, and calibrate the position information of the elements other than the vehicle 1 in the road live-action image by using the more accurate position information of the vehicle 1 obtained by the perception system for automatic driving.
For example, fig. 7 shows a schematic diagram of the distribution of element position information in a road live-action image. As shown in fig. 7, assume that the vehicle 1, the vehicle 2, the object 3, the object 4 and the object 5 are distributed in the road live-action image. The position information of the vehicle 1, the vehicle 2, the object 3, the object 4 and the object 5 obtained by the SLAM method is (x1, y1, z1), (x2, y2, z2), (a, b, c), (d, e, f) and (h, i, j), respectively. Because the accuracy of the road space information of objects acquired from the road live-action image in the SLAM mode is low, the accuracy needs to be improved. The vehicle 1 can accurately perceive the positions of the vehicle 1 and the vehicle 2 as (x1', y1', z1') and (x2', y2', z2') based on the perception system for automatic driving, and can calculate an offset matrix δT based on the difference Δ(x1, y1, z1) between (x1', y1', z1') and (x1, y1, z1), or the difference Δ(x2, y2, z2) between (x2', y2', z2') and (x2, y2, z2), together with the intrinsic transformation matrix information known to the vehicle 1. The recalibrated spatial position information is X' = X × δT, so that the calibrated position information of the vehicle 1, the vehicle 2, the object 3, the object 4 and the object 5 is (x1', y1', z1'), (x2', y2', z2'), (a, b, c) × δT, (d, e, f) × δT and (h, i, j) × δT, respectively.
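A simplified sketch of this calibration idea; the patent's δT combines the position difference with intrinsic transformation matrix information, which is not reproduced here, so a per-axis correction factor derived from one reference object stands in for it:

```python
import numpy as np

def calibrate_positions(slam_ref: np.ndarray, perceived_ref: np.ndarray,
                        slam_points: np.ndarray) -> np.ndarray:
    """Stand-in for X' = X * deltaT: derive a per-axis correction from one reference
    object whose position is known both from SLAM and from the (more accurate)
    driving perception system, then apply it to the remaining SLAM positions."""
    delta = perceived_ref / slam_ref            # per-axis correction factors
    return slam_points * delta                  # broadcast over all points

slam_ref = np.array([10.0, 2.0, 0.5])           # vehicle 2 position from SLAM
perceived_ref = np.array([10.4, 2.1, 0.5])      # vehicle 2 position from the perception system
others = np.array([[20.0, -1.0, 0.5],           # object 3 ... object 5 from SLAM
                   [35.0,  3.0, 0.5],
                   [50.0,  0.0, 0.5]])
print(calibrate_positions(slam_ref, perceived_ref, others))
```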
It will be appreciated that the road space information can also be obtained in manners other than the above. As described above, when the vehicle 1 can also call a high-precision map and a laser radar, self-localization can be obtained based on the high-precision map and the laser radar.
When the vehicle 1 can only call a standard definition (Standard Definition, SD) map and ordinary global positioning system (Global Positioning System, GPS)/BeiDou positioning, the lane information is unknown due to interference such as occlusion and multipath. The vehicle 1 can identify the lane line position and correct the lane in which it is located by combining information such as driving recorder (Digital Video Recorder, DVR) vision, inertial navigation, real-time kinematic (Real Time Kinematic, RTK) dual-frequency carrier-phase differential positioning, vehicle speed, maps and laser radar. For example, the vehicle 1 may also obtain data through the camera, radar, laser radar, global navigation satellite system, inertial sensor and standard definition map on the vehicle 1, process the data to obtain the road structure, and obtain the positional relationship between the vehicle 1 and the road, for example, which lane line the vehicle 1 is on.
In order to identify the position (lateral and/or longitudinal position) of the vehicle 1, for example, sub-meter level high-precision positioning is performed. Fig. 8 shows a schematic diagram of a process of determining the positional relationship between the vehicle 1 and a lane line where the vehicle 1 is located. As shown in fig. 8, the process of the vehicle 1 determining the positional relationship of the vehicle 1 and the lane line where it is located is as follows:
A. And obtaining the target distance of the multi-frame fusion information image. In this step, the vehicle 1 may acquire sensor data from the sensors and identify the markers on the road from the sensor data and further determine the distance between the vehicle 1 and the identified markers on the road. Wherein the marker may be a utility pole, traffic light, etc. The sensor may be a camera, radar, lidar or the like. For example, the sensor data may include a road live-action image collected by the camera, and the vehicle 1 may extract a marker on the road from the road live-action image collected by the camera. The sensor data may also include a distance measured by the lidar to a marker on the roadway.
B. Acquiring the drivable area (freespace). In this step, the vehicle 1 may acquire sensor data from a camera, radar, laser radar or the like, and derive from the sensor data the position of the road on which the vehicle 1 can travel.
C. Obtaining the traffic flow. The traffic flow refers to the vehicles around the vehicle 1. In this step, the vehicle 1 may acquire sensor data from a camera, radar, laser radar or the like, recognize from the sensor data how many traffic flows there are in front of the vehicle 1, and further determine, based on these traffic flows, which lane the vehicle 1 is on.
D. Static element perception. By way of example, the static elements may be lane lines, road edges, stop lines, pavement markers, cones, traffic lights and the like. In this step, the vehicle 1 may acquire sensor data from a camera, radar, laser radar or the like, and identify the static elements from the sensor data. The static element recognition results may be used to assist in locating the position of the vehicle 1.
E. And (5) sensing the information of the own vehicle. By way of example, the vehicle information may be the speed, acceleration, road pitch angle, head steering, body orientation, etc. of the vehicle. At this step, the vehicle 1 may acquire sensor data from a global navigation satellite system, an inertial sensor, or the like, and recognize own vehicle information from the sensor data.
F. And (5) laser assisted positioning. In this step, the vehicle 1 may calculate the lane line in which the vehicle 1 is located based on the information obtained in steps A, B, C and D. It can be understood that if the current environment of the vehicle 1 is a dark environment, the road image captured by the camera of the vehicle 1 cannot accurately locate the lane line of the vehicle 1, and in this case, the laser radar can still accurately locate the lane line of the vehicle 1 in the dark environment, so that the vehicle 1 can use the laser to assist in locating the lane line of the vehicle 1.
G. And (5) road structure reasoning. In this step, the vehicle 1 calculates the road structure of the road on which the vehicle 1 is located from the information obtained in steps A, B, C and D.
H. And calculating the self-vehicle pose. In this step, the vehicle 1 calculates the pose of the vehicle 1 from the data obtained in step B.
I. And inquiring the peripheral road network. In this step, the vehicle 1 may query the surrounding road network information according to the standard definition map.
J. And extracting crossing key information. In this step, the vehicle 1 may extract the intersection key information according to the result of the surrounding road network obtained in step I. By way of example, the intersection key information may be a turn where a traffic light is located, etc. The intersection key information can be used to assist in locating the position of the vehicle 1.
K. And (5) reasoning of complex road conditions. In this step, the vehicle 1 may obtain the positional relationship of the vehicle 1 and the road structure according to steps G and H.
And L, extreme weather road perception. In this step, the vehicle 1 may obtain the positional relationship of the vehicle 1 and the road structure according to step F.
In the complex road condition estimation in the above-described scheme, the lane line where the vehicle 1 is located may be confirmed by the following two ways:
fusion positioning technology 1: the vehicle 1 recognizes the lane lines and the road edges through a visual algorithm, can deduce the lane which is far to the left or the right of the current lane, and can directly recognize the lane index if the number of lanes per se is small and the front vehicle is not seriously blocked; if the current road section is blocked, the traffic flow can be identified through a visual algorithm to deduce the position of the lane, and finally the total number of lanes of the current road section of the map is fused to obtain the current lane index.
Fusion positioning technology 2: during the running of the vehicle 1, a history information list (including the current lane index, the total number of road lanes and the like) of the lanes where the vehicle 1 is located can be maintained, the lane change behavior of the vehicle 1 is judged through the angle of the lane lines and the vehicle direction angle, and the lanes where the vehicle 1 is located are updated by combining the map lane number information.
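As a toy illustration of fusion positioning technology 2 — judging a lane change from the angle between the lane line and the vehicle heading and then clamping the lane index to the map's lane count — the following sketch uses an assumed angle threshold and sign convention that are not specified in the patent:

```python
def update_lane_index(current_index: int, total_lanes: int,
                      lane_line_angle_deg: float, heading_angle_deg: float,
                      change_threshold_deg: float = 8.0) -> int:
    """Infer a lane change from the heading angle relative to the lane line, then
    clamp the updated index using the map's total lane count."""
    relative = heading_angle_deg - lane_line_angle_deg
    if relative > change_threshold_deg:          # drifting towards the right lane
        current_index += 1
    elif relative < -change_threshold_deg:       # drifting towards the left lane
        current_index -= 1
    return max(0, min(total_lanes - 1, current_index))

# Example: the vehicle is in lane 1 of 4 and is heading 10 degrees to the right of the
# lane line direction, so the maintained lane index is updated to 2.
print(update_lane_index(1, 4, lane_line_angle_deg=0.0, heading_angle_deg=10.0))
```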
302: and determining the predicted display position of the AR information corresponding to the target object based on the spatial feature information.
For example, as shown in fig. 2B, in the navigation application scenario, if the vehicle 1 is in a driving state, the vehicle 1 acquires a road live-action image at the position P1 at time t0, and determines, based on the road live-action image, the predicted display position of the steering icon A001 at the turn where the traffic light is located to be the position P1', wherein the position of the turn is determined by the vehicle 1 at least according to the position of the target object, namely the traffic light.
It is understood that the AR information may include display contents in addition to the display position, wherein the display contents of the AR information may include, but are not limited to, an AR image required to be superimposed on a live-action for route navigation, an obstacle, a region of interest, and the like.
For example, the navigation information of a square obstacle is rendered as one trapezoid at view angle A and as a different trapezoid at view angle B, where the display of the obstacle navigation information at view angle B is composed of the position transformation of a plurality of position points. It will be appreciated that the final display positions of the navigation information, the obstacle and the region of interest are varied through key location points among the plurality of position points.
303: based on the relationship between the second position and the first position to which the vehicle 1 travels at the second time, the predicted display position of the AR information is calibrated, and the calibrated AR information is obtained.
It will be appreciated that when the vehicle 1 is in a driving state, it takes a certain time for the vehicle 1 to determine the position at which the AR information is displayed in the front view based on the collected road live-action image, and during this time, the vehicle 1 may move, and thus, when the vehicle 1 displays the aforementioned AR information, the front view has changed, and the predicted display position of the AR information deviates from the position at which the AR information should actually be displayed.
To compensate for this deviation, the vehicle 1 may perform position correction and compensation before AR imaging according to the navigation route and the time delay. Specifically, the vehicle 1 may read the AR information and the recognized track information of the navigation map as well as the current position of the vehicle 1, and estimate, along the route, the new position that the vehicle 1 will have reached according to the travel speed of the vehicle 1 and the delay.
It will be appreciated that an inertial navigation system (Inertial Navigation System, INS) is a navigation parameter calculation system that uses gyroscopes and accelerometers as sensitive devices: it establishes a navigation coordinate system based on the gyroscope output and calculates the speed and position of the vehicle 1 in the navigation coordinate system based on the accelerometer output. When the vehicle 1 judges that the navigation signal is poor, position correction and compensation can be performed before AR imaging according to inertial navigation and the time delay.
Specifically, assume that the time required for the vehicle 1 to calculate the display position of the AR information based on the road live-action image is p seconds, and that the current running speed of the vehicle 1, estimated from the tire rotation speed, is v (m/s). Further, according to the AR information, the heading of the current road is rotated counterclockwise by an angle a relative to due east. A coordinate system is established with due east of the vehicle 1 as the x axis, due north as the y axis, and the current position of the vehicle 1 as the origin (0, 0). The calibrated AR display position is then obtained by subtracting the predicted deviation (v·p·sin(a), v·p·cos(a)) from the display position of the AR information calculated based on the road live-action image.
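A worked example of the deviation term above (the numeric values are illustrative only):

```python
import math

# Prediction delay p = 0.3 s, speed v = 12 m/s, road heading a = 30 degrees
# counterclockwise from due east (x axis east, y axis north, origin at the vehicle).
v, p, a = 12.0, 0.3, math.radians(30.0)
deviation = (v * p * math.sin(a), v * p * math.cos(a))
print(deviation)  # components (in metres) subtracted from the predicted display position
```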
It will be appreciated that the predicted compensation for delay is not based solely on visual, inertial navigation trajectories, but that the vehicle 1 may also read information in the navigation map to compensate for the deviation.
In some embodiments, when the vehicle 1 selects road feature information in the road live-action image based on SLAM, the road feature information within a preset distance of the position where the AR information is located is mainly selected, and the three-dimensional coordinates of the road feature information in three-dimensional space are estimated according to the geometric perspective method.
304: and converting the calibrated display information from the image coordinate system to an AR HUD coordinate system.
It will be appreciated that the camera is typically provided on a bumper of the vehicle 1, below the level of the human eye, so the angle at which the human eye sees the scene differs from the angle at which the camera images it. To enhance the user experience, it is therefore necessary to convert the image taken by the camera from the image coordinate system into an image at the human eye angle. The image shot by the camera may be a road live-action image. The vehicle 1 may convert the road live-action image in the image coordinate system into a road live-action image at the human eye angle, convert the calibrated display information from the image coordinate system into the human eye coordinate system based on the image at the human eye angle, and then convert it from the human eye coordinate system into the AR HUD coordinate system.
For example, fig. 9 illustrates a general schematic diagram of converting an image captured by a camera into an image at the angle of the human eye, according to some embodiments of the application.
E represents the AR HUD, A represents the camera, B represents the origin on the center axis of the rear wheels of the vehicle 1, C represents the world coordinate system, and D represents the human eyes.
As shown in fig. 9, the vehicle 1 needs to convert the road live-action image photographed by the camera from the angle imaged by the camera to the human eye angle. The matrix transformation parameters required for the foregoing conversion may be default parameters carried by the vehicle 1 itself.
The aforementioned angle change may be realized through transformations across a number of coordinate systems, for example 5. The following describes the transformation of the road feature information between the angle imaged by the camera and the angle of the human eye, taking 5 coordinate systems as an example.
Specifically, the road live-action image captured by the camera on the vehicle 1 can be sequentially converted from the image coordinate system to the camera coordinate system, from the camera coordinate system to the vehicle 1 coordinate system, from the vehicle 1 coordinate system to the world coordinate system, from the world coordinate system to the human eye coordinate system, and from the human eye coordinate system to the AR HUD coordinate system.
The formula for each of the above 5 coordinate system conversion steps is described below:
(A) The image coordinate system is converted into the camera coordinate system; under the pinhole camera model the conversion satisfies x = f·Xc/Zc, y = f·Yc/Zc:
wherein f in the above formula represents the focal length of the camera, (x, y) is a point in the road live-action image coordinate system, and (Xc, Yc, Zc) is the corresponding point in the camera coordinate system.
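As a minimal illustration of step (A) only, and assuming the depth Zc of the point is available (for example from the SLAM estimate described earlier), a hypothetical helper could back-project an image point as follows; the names are introduced here for illustration and are not part of the embodiment:

```python
import numpy as np

def image_to_camera(x, y, depth_zc, focal_length):
    """Back-project an image point (x, y) with known depth Zc into the camera
    coordinate system using the pinhole relation x = f*Xc/Zc, y = f*Yc/Zc."""
    xc = x * depth_zc / focal_length
    yc = y * depth_zc / focal_length
    return np.array([xc, yc, depth_zc])
```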
(B) The camera coordinate system is converted into the vehicle 1 coordinate system, in the form [Xv, Yv, Zv]^T = R·[Xc, Yc, Zc]^T + t:
wherein R in the above formula is a 3×3 rotation matrix representing the angular difference of the camera relative to the forward direction of the vehicle 1, and t is a 3×1 translation matrix representing the translation of the camera relative to the rear-axle coordinate system of the vehicle 1.
(C) The vehicle 1 coordinate system is converted into the world coordinate system, in the form [Xw, Yw, Zw]^T = R·[Xv, Yv, Zv]^T + t:
wherein R in the above formula is a 3×3 rotation matrix representing the angular difference between the forward direction of the vehicle 1 and the world coordinate system, and t is a 3×1 translation matrix representing the translation of the rear axle of the vehicle 1 relative to the origin of the world coordinate system.
(D) The world coordinate system is converted into the human eye coordinate system, in the form [Xe, Ye, Ze]^T = R·[Xw, Yw, Zw]^T + t:
wherein R in the above formula is a 3×3 rotation matrix representing the angular difference of the forward direction of the vehicle 1 relative to the human eye, and t is a 3×1 translation matrix representing the translation of the rear axle of the vehicle 1 relative to the human eye.
(E) The human eye coordinate system is converted into the AR HUD coordinate system, again through the pinhole projection x = f·Xc/Zc, y = f·Yc/Zc:
wherein (x, y) in the above formula is the coordinate position on the AR HUD, (Xc, Yc, Zc) is the position of the object in the human eye coordinate system, and f represents the focal length of the AR HUD (the distance from the focal plane to the eyebox).
The transformation manner and transformation parameters of the whole process may be calibrated when each vehicle leaves the factory, generating a 5-step end-to-end transformation matrix X from the road live-action image coordinate system to the AR HUD coordinate system and a human-eye position correction matrix E. If the road live-action image coordinates containing the AR information to be rendered form a matrix V, and the real-time position of the human eye is P, the coordinates at which the AR HUD displays the AR information are: O = V·X·P·E.
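As a sketch only, the per-step conversions (B) to (E) can be chained in code as follows; it assumes the per-step rotation and translation pairs have already been calibrated, and all names are introduced here for illustration rather than taken from the embodiment:

```python
import numpy as np

def rigid(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and a length-3 translation t."""
    T = np.eye(4)
    T[:3, :3] = np.asarray(R, float)
    T[:3, 3] = np.asarray(t, float).reshape(3)
    return T

def camera_point_to_hud(p_cam, T_vehicle_cam, T_world_vehicle, T_eye_world, f_hud):
    """Carry a 3D point from the camera coordinate system to AR HUD coordinates.

    Steps (B)-(D) are chained as rigid transforms; step (E) applies the HUD
    pinhole projection x = f*Xe/Ze, y = f*Ye/Ze described in the text.
    """
    T_eye_cam = T_eye_world @ T_world_vehicle @ T_vehicle_cam     # composed calibration
    p_eye = T_eye_cam @ np.append(np.asarray(p_cam, float), 1.0)  # homogeneous point
    xe, ye, ze = p_eye[:3]
    return f_hud * xe / ze, f_hud * ye / ze                       # AR HUD coordinates
```

Composing the rigid steps into a single reusable matrix mirrors the idea of calibrating an end-to-end transformation once at the factory and applying it at run time.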
305: AR information is displayed in the front view based on the display information in the AR HUD coordinate system.
For example, as shown in fig. 2, the turning icon a001 is displayed on the road live-action image a01 at the calibrated position of the pedestrian path P2' at the turn where the street lamp is located.
Fig. 10 is a schematic view of a possible functional framework of the vehicle 1 according to an embodiment of the present application. As shown in fig. 10, the functional framework of the vehicle 1 may include various subsystems, such as a sensor system 10, a control system 20, one or more peripheral devices 16 (one is shown in the illustration), a power supply 40, a computer system 50, and a heads-up display system 60. Optionally, the vehicle 1 may also include other functional systems, such as an engine system for powering the vehicle 1, which is not limited herein.
The sensor system 10 may include a number of sensing devices that sense the measured information and convert the sensed information to an electrical signal or other desired form of information output according to a certain law. As shown, these detection devices may include, but are not limited to, a global positioning system 11 (global positioning system, GPS), a vehicle speed sensor 12, an inertial measurement unit 13 (inertial measurement unit, IMU), a radar unit 14, a laser rangefinder 15, an imaging unit 16, a wheel speed sensor 17, a steering sensor 18, a gear sensor 19, or other elements for automatic detection, and so forth.
The global positioning system GPS11 is a system for performing positioning and navigation in real time on a global scale by using GPS positioning satellites. In the present application, the global positioning system GPS11 can be used to realize real-time positioning of the vehicle 1 and provide geographic position information of the vehicle 1. The vehicle speed sensor 12 detects a running vehicle speed of the vehicle 1. The inertial measurement unit 13 may include a combination of an accelerometer and a gyroscope, which is a device that measures the angular rate and acceleration of the vehicle 1. For example, during running of the vehicle 1, the inertia measurement unit may measure a position and an angle change of the vehicle body, etc., based on inertial acceleration of the vehicle 1.
The radar unit 14 may also be referred to as a radar system. The radar unit uses wireless signals to sense objects in the current environment in which the vehicle 1 is traveling. Optionally, the radar unit may also sense information such as the speed of travel and direction of travel of an object. In practical applications, the radar unit may be configured with one or more antennas for receiving or transmitting wireless signals. The laser rangefinder 15 may be an instrument that uses modulated laser light to achieve distance measurement of a target object. In practical applications, the laser rangefinder may include, but is not limited to, any one or a combination of the following: a laser source, a laser scanner, and a laser detector.
The image capturing unit 16 is used to capture images and videos. In the application, the image capturing device may collect images of the environment where the vehicle 1 is located in real time during the running of the vehicle 1 or after the device is started. For example, the image capturing device may acquire corresponding images continuously and in real time while the vehicle 1 enters and exits a tunnel. In practical applications, the image capturing device includes, but is not limited to, a driving recorder, a camera or other elements for photographing/videography, and the number of image capturing devices is not limited in the present application.
The wheel speed sensor 17 is a sensor for detecting the wheel speed of the vehicle 1. Common wheel speed sensors 17 may include, but are not limited to, magneto-electric wheel speed sensors and Hall-type wheel speed sensors. The steering sensor 18, which may also be referred to as a steering angle sensor, may represent a system for detecting the steering angle of the vehicle 1. In practice, the steering sensor 18 may be used to measure the steering angle of the steering wheel of the vehicle 1, or to measure an electrical signal indicative of the steering angle of the steering wheel of the vehicle 1. Alternatively, the steering sensor 18 may be used to measure the steering angle of the tires of the vehicle 1, or to measure an electrical signal indicating the steering angle of the tires of the vehicle 1, and the like; the present application is not limited thereto.
That is, the steering sensor 18 may be used to measure any one or a combination of the following: steering angle of the steering wheel, electric signals indicating steering angle of the steering wheel, steering angle of the wheels (tires of the vehicle 1), electric signals indicating steering angle of the wheels, and the like.
The gear sensor 19 is used to detect the current gear in which the vehicle 1 is traveling. The gears of the vehicle 1 may also differ from manufacturer to manufacturer. Taking an autonomous vehicle 1 as an example, the vehicle 1 may support 6 gears: P, R, N, D, 2, and L. The P (park) gear is used for parking; it locks the braking portion of the vehicle 1 by a mechanical device so that the vehicle 1 cannot move. The R (reverse) gear, also known as the reverse gear, is used for reversing the vehicle 1. The D (drive) gear, also called the forward gear, is used for the vehicle 1 to travel on the road. The 2 (second) gear is also a forward gear, used to adjust the running speed of the vehicle 1, and is typically used on uphill and downhill sections. The L (low) gear, also referred to as the low-speed gear, is used to limit the travel speed of the vehicle 1. For example, on a downhill road, the vehicle 1 may enter the L gear so that engine braking is used when going downhill, and the driver does not need to keep stepping on the brake for a long time, which would overheat the brake pads and create danger.
The control system 20 may include several elements, such as a steering unit 21, a braking unit 22, a lighting system 23, an autopilot system 24, a map navigation system 25, a network timing system 26, and an obstacle avoidance system 27, as shown. Optionally, the control system 20 may further include elements such as a throttle controller and an engine controller for controlling the running speed of the vehicle 1, which is not limited by the present application.
The steering unit 21 may represent a system for adjusting the direction of travel of the vehicle 1, and may include, but is not limited to, a steering wheel or any other structural device for adjusting or controlling the direction of travel of the vehicle 1. The brake unit 22 may represent a system for slowing the travel speed of the vehicle 1, which may also be referred to as the vehicle brake system. It may include, but is not limited to, a brake controller, a retarder or any other structural device for decelerating the vehicle 1. In practice, the braking unit 22 may utilize friction to slow the tires of the vehicle 1 and thus its running speed. The lighting system 23 is used to provide a lighting function or a warning function for the vehicle 1. For example, during night driving of the vehicle 1, the lighting system 23 may activate the front and rear lights of the vehicle 1 to provide illumination for driving and ensure safe driving. In practical applications, the lighting system includes, but is not limited to, front lights, rear lights, width lamps (position lamps), warning lights, and the like.
The autopilot system 24 may include hardware and software systems for processing and analyzing data input to the autopilot system 24 to obtain actual control parameters of components in the control system 20, such as the desired brake pressure of the brake controller in the brake unit and the desired torque of the engine. This facilitates the control system 20 in implementing the corresponding control and ensures safe running of the vehicle 1. Optionally, the autopilot system 24 may also determine, by analyzing the data, information such as obstacles faced by the vehicle 1 and characteristics of the environment in which the vehicle 1 is located (e.g., the lane in which the vehicle 1 is currently traveling, road boundaries, and upcoming traffic lights). The data input to the autopilot system 24 may be image data collected by the image capturing device, or data collected by various elements in the sensor system 10, such as the steering wheel angle provided by the steering angle sensor and the wheel speed provided by the wheel speed sensor; the present application is not limited thereto.
The map navigation system 25 is used to provide map information and navigation services for the vehicle 1. In practical applications, the map navigation system 25 may plan an optimal driving route, such as the route with the shortest distance or with less traffic, according to the positioning information of the vehicle 1 provided by the GPS (specifically, the current position of the vehicle 1) and the destination address input by the user. This facilitates the vehicle 1 in navigating along the optimal driving route to reach the destination address. Optionally, in addition to providing the navigation function, the map navigation system may provide or display corresponding map information to the user according to the actual requirements of the user, such as displaying in real time the road section on which the vehicle 1 is currently traveling; the present application is not limited thereto.
The network time synchronization system 26 (network time system, NTS) is used to provide time synchronization services to ensure that the current time of the system of the vehicle 1 is synchronized with the network standard time, which is advantageous for providing more accurate time information to the vehicle 1. In particular, the network time synchronization system 26 may obtain a standard time signal from a GPS satellite, and use the time signal to synchronously update the current time of the system of the vehicle 1, so as to ensure that the current time of the system of the vehicle 1 is consistent with the time of the obtained standard time signal.
The obstacle avoidance system 27 is used to predict obstacles that may be encountered during the running of the vehicle 1, and thereby control the vehicle 1 to bypass or drive over the obstacles to achieve normal running. For example, the obstacle avoidance system 27 may utilize sensor data collected by various elements of the sensor system 10 to determine possible obstacles on the path of the vehicle 1. If the obstacle is large, such as a stationary building at the roadside, the obstacle avoidance system 27 may control the vehicle 1 to bypass the obstacle for safe travel. Conversely, if the obstacle is small, such as a small stone on the road, the obstacle avoidance system 27 may control the vehicle 1 to continue forward over the obstacle, and so on.
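Only as a simplified sketch of the decision just described (the size thresholds, class and function names are assumptions introduced here, not part of the embodiment):

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    height_m: float   # estimated obstacle height from sensor data
    width_m: float    # estimated obstacle width from sensor data

def avoidance_action(obstacle: Obstacle, clearance_m: float = 0.15) -> str:
    """Decide whether to bypass a large obstacle or drive over a small one."""
    if obstacle.height_m > clearance_m or obstacle.width_m > 1.0:
        return "bypass"       # large obstacle, e.g. a roadside building
    return "drive_over"       # small obstacle, e.g. a small stone on the road
```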
The peripheral device 16 may include several elements, such as a communication system 31, a touch screen 32, a user interface 33, a microphone 34, and a speaker 35, among others, as shown. The communication system 31 is for enabling network communication between the vehicle 1 and devices other than the vehicle 1. In practical applications, the communication system 31 may implement network communication between the vehicle 1 and other devices using wireless communication technology or wired communication technology. The wired communication technology may refer to communication between the vehicle 1 and other devices by means of a network cable or an optical fiber, or the like. The wireless communication technologies include, but are not limited to, the global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time division-synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), wireless local area networks (WLAN) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), the global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technologies, among others.
The touch screen 32 may be used to detect operating instructions on the touch screen 32. For example, the user performs a touch operation on the content data displayed on the touch screen 32 according to actual requirements, so as to implement the function corresponding to the touch operation, for example playing multimedia files such as music and video. The user interface 33 may be a touch panel for detecting operating instructions on the touch panel; it may also be a physical key or a mouse. The user interface 33 may also be a display screen for outputting data or displaying images. Optionally, the user interface 33 may also be at least one device belonging to the category of peripheral devices, such as a touch screen, a microphone, or a speaker.
The microphone 34, also called a mike, is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user speaks near the microphone, and the sound signal is input into the microphone. The speaker 35, also called a horn, is used to convert an audio electrical signal into a sound signal. The vehicle 1 can play music, provide hands-free calls, and the like through the speaker 35.
The power source 40 represents a system that provides power or energy to the vehicle 1, which may include, but is not limited to, a rechargeable lithium battery or lead acid battery, or the like. In practical applications, one or more battery packs in the power supply are used to provide electric energy or power for starting the vehicle 1, and the kind and materials of the power supply are not limited by the present application. Alternatively, the power source 40 may be an energy source for providing a source of energy to the vehicle 1, such as gasoline, diesel, ethanol, solar cells or panels, etc., and the application is not limited.
Several functions of the vehicle 1 are controlled by the computer system 50. Computer system 50 may include one or more processors 51 (illustrated as one processor) and memory 52 (which may also be referred to as storage). In practical applications, the memory 52 may be internal to the computer system 50, or may be external to the computer system 50, for example, as a cache in the vehicle 1, and the application is not limited thereto.
The processor 51 may include one or more general-purpose processors, such as a graphics processing unit (GPU). The processor 51 may be configured to execute the relevant programs, or instructions corresponding to the programs, stored in the memory 52 to implement the corresponding functions of the vehicle 1.
The memory 52 may include a volatile memory, such as a random access memory (RAM); it may also include a non-volatile memory, such as a ROM, a flash memory, an HDD, or a solid state disk (SSD); the memory 52 may also include a combination of the above types of memory. The memory 52 may be used to store a set of program codes, or instructions corresponding to the program codes, so that the processor 51 can invoke the program codes or instructions stored in the memory 52 to implement the corresponding functions of the vehicle 1, including but not limited to some or all of the functions in the functional framework diagram of the vehicle 1 shown in fig. 10. In the present application, the memory 52 may store a set of program codes for controlling the vehicle 1, and the processor 51 may call the program codes to control the safe driving of the vehicle 1, as described in detail in the present application.
Alternatively, the memory 52 may store information such as road maps, driving routes, sensor data, and the like, in addition to program codes or instructions. The computer system 50 may implement the relevant functions of the vehicle 1 in combination with other elements in the functional framework schematic of the vehicle 1, such as sensors in a sensor system, GPS, etc. For example, the computer system 50 may control the traveling direction or traveling speed of the vehicle 1, etc., based on the data input of the sensor system 10, and the present application is not limited thereto.
The heads-up display system 60 may include several elements, such as a windshield 61, a controller 62, and a heads-up display 63 as shown. The controller 62 is configured to generate an image according to a user instruction and transmit the image to the heads-up display 63. The heads-up display 63 may include an image generation unit, a pluggable lens assembly, and a mirror assembly, and the windshield 61 cooperates with the heads-up display to implement the optical path of the head-up display system so as to present a target image in front of the driver. It should be noted that the functions of some elements in the head-up display system may be implemented by other subsystems of the vehicle 1; for example, the controller 62 may also be an element in the control system.
In this regard, fig. 10 of the present application shows four subsystems, the sensor system 10, the control system 20, the computer system 50, and the heads-up display system 60, by way of example only and not by way of limitation. In practical applications, the vehicle 1 may combine several elements of the vehicle 1 according to different functions, thereby obtaining subsystems with correspondingly different functions. For example, an electronic stability program (ESP), an electric power steering (EPS) system, and the like, which are not shown, may also be included in the vehicle 1. The ESP system may consist of some of the sensors in the sensor system 10 and some of the elements in the control system 20; specifically, the ESP system may include the wheel speed sensor 17, the steering sensor 18, a lateral acceleration sensor, a control unit involved in the control system 20, and the like. The EPS system may be composed of some of the sensors in the sensor system 10, some of the elements in the control system 20, and the power source 40; specifically, the EPS system may include the steering sensor 18, the generator and reducer involved in the control system 20, the battery power source, and so forth. For another example, the head-up display system may also include the user interface 33 and the touch screen 32 in the peripheral device to implement the function of receiving a user instruction, and may further include the image capturing unit in the sensor system for generating an image in cooperation with the controller 62; for example, the image capturing unit sends the image to the controller 62.
It should be noted that fig. 10 is only a schematic view of one possible functional framework of the vehicle 1. In practice, the vehicle 1 may include more or fewer systems or elements, and the application is not limited thereto.
The vehicle 1 may be a car, a truck, a motorcycle, a bus, a ship, an airplane, a helicopter, a lawn mower, a recreational vehicle, an amusement park vehicle, construction equipment, an electric car, a golf cart, a train, a trolley, or the like; the embodiment of the present application is not particularly limited.
Embodiments of the disclosed mechanisms may be implemented in hardware, software, firmware, or a combination of these implementations. Embodiments of the application may be implemented as a computer program or program code that is executed on a programmable system comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
Program code may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices in a known manner. For the purposes of this application, a processing system includes any system having a processor such as, for example, a Digital Signal Processor (DSP), a microcontroller, an Application Specific Integrated Circuit (ASIC), or a microprocessor.
The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. Program code may also be implemented in assembly or machine language, if desired. Indeed, the mechanisms described in the present application are not limited in scope by any particular programming language. In either case, the language may be a compiled or interpreted language.
In some cases, the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. For example, the instructions may be distributed over a network or through other computer readable media. Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including but not limited to floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or a tangible machine-readable memory used to transmit information over the Internet in an electrical, optical, acoustical or other form of propagated signal (e.g., carrier waves, infrared signals, digital signals, etc.). Thus, a machine-readable medium includes any type of machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).
In the drawings, some structural or methodological features may be shown in a particular arrangement and/or order. However, it should be understood that such a particular arrangement and/or ordering may not be required. Rather, in some embodiments, these features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of structural or methodological features in a particular figure is not meant to imply that such features are required in all embodiments, and in some embodiments, may not be included or may be combined with other features.
It should be noted that, in the embodiments of the present application, each unit/module mentioned in each device is a logic unit/module. Physically, a logic unit/module may be a physical unit/module, a part of a physical unit/module, or a combination of multiple physical units/modules; the physical implementation of the logic unit/module itself is not the most important, and it is the combination of functions implemented by the logic units/modules that is key to solving the technical problem addressed by the present application. Furthermore, in order to highlight the innovative part of the present application, the above device embodiments do not introduce units/modules that are less closely related to solving the technical problem addressed by the present application, which does not indicate that the above device embodiments do not include other units/modules.
It should be noted that in the examples and descriptions of this patent, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
While the application has been shown and described with reference to certain preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the application.

Claims (17)

1. An AR information display method under a real-time moving live-action, applied to an electronic device, characterized by comprising the following steps:
acquiring a first road live-action image of the electronic equipment at a first position at a first moment in a moving process;
determining a predicted display position of AR information corresponding to a first target object based on the first road live-action image;
determining calibration information of the predicted display position based on a movement speed of the electronic device during the movement of the electronic device from the first position to the second position;
acquiring a calibrated display position based on the calibration information;
displaying the AR information in a front view based on the calibrated display position when the electronic device is at the second position.
2. The method of claim 1, wherein when the electronic device is at the second location, prior to displaying the AR information in the front view based on the calibrated display location, the method further comprises: and converting the calibrated display information from the image coordinate system into a human eye coordinate system and then into an augmented reality head-up display coordinate system.
3. The method of claim 1, wherein determining the predicted display position of the AR information corresponding to the first object based on the first road live-action image comprises:
Determining spatial feature information based on the first road live-action image;
and determining the prediction display information of the AR information corresponding to the first target object based on the spatial feature information.
4. A method according to claim 3, wherein said determining spatial feature information based on the first road live-action image comprises:
and determining the spatial characteristic information based on a synchronous positioning and mapping technology and the first road live-action image.
5. The method of claim 3 or 4, wherein the determining spatial feature information based on the first road live-action image comprises:
if a curved or up-down slope road exists in the first road live-action image, converting an image shot by a camera angle of the first road live-action image into an image under a world coordinate system.
6. The method according to claim 1, wherein the AR information comprises driving assistance information and/or navigation guidance information.
7. A vehicle comprising a heads-up display and an AR information display system;
the AR information display system is used for acquiring a first road live-action image of the vehicle at a first position at a first moment in the moving process;
The AR information display system is used for determining the predicted display position of AR information corresponding to a first target object based on the first road live-action image;
the AR information display system is used for determining calibration information of the predicted display position based on the moving speed of the vehicle in the process of moving the vehicle from a first position to a second position;
the AR information display system is used for acquiring a calibrated display position based on the calibration information;
the AR information display system is used for controlling the head-up display to display the AR information based on the calibrated display position when the vehicle is at the second position.
8. The vehicle of claim 7, wherein the AR information display system is configured to convert the calibrated display information from an image coordinate system to a human eye coordinate system and then to an augmented reality head-up display coordinate system before displaying the AR information in the forward view based on the calibrated display position.
9. The vehicle of claim 7, wherein the AR information display system is configured to determine a predicted display position of AR information corresponding to a first target object based on the first road live-action image, comprising:
The AR information display system is used for determining spatial feature information based on the first road live-action image;
the AR information display system is used for determining prediction display information of AR information corresponding to the first target object based on the spatial feature information.
10. The vehicle of claim 9, wherein the AR information display system determines spatial feature information based on the first road live-action image, comprising:
the AR display system is used for determining the spatial feature information based on synchronous positioning and mapping technology and the first road live-action image.
11. The vehicle according to claim 9 or 10, characterized in that the AR information display system is configured to determine spatial feature information based on the first road live-action image, comprising:
the AR information display system is used for converting an image shot by a camera angle of the first road live-action image into an image under a world coordinate system under the condition that the first road live-action image has a curve or an ascending and descending slope.
12. The vehicle of claim 7, characterized in that the AR information comprises driving assistance information and/or navigation guidance information.
13. An apparatus, the apparatus comprising:
one or more memories storing instructions;
a processor coupled to the one or more memories, wherein the instructions, when executed by the processor, cause the apparatus to perform the real-time mobile live-action AR information display method of any one of claims 1 to 6.
14. The device of claim 13, wherein the device is a vehicle, a cell phone, a watch, or AR glasses.
15. A computer-readable storage medium, wherein instructions stored thereon, which when executed on an electronic device, cause the electronic device to perform the real-time mobile live-action AR information display method of any one of claims 1 to 6.
16. A computer program product comprising instructions which, when run on a computer, cause the computer to perform the real-time mobile live-action AR information display method of any one of claims 1 to 6.
17. A chip, wherein the chip is coupled to a memory for reading and executing program instructions stored in the memory to implement the real-time mobile live-action AR information display method according to any one of claims 1 to 6.
CN202210130293.XA 2022-02-11 2022-02-11 AR information display method, vehicle and device under real-time moving live-action Pending CN116630577A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210130293.XA CN116630577A (en) 2022-02-11 2022-02-11 AR information display method, vehicle and device under real-time moving live-action

Publications (1)

Publication Number Publication Date
CN116630577A true CN116630577A (en) 2023-08-22

Family

ID=87615793

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210130293.XA Pending CN116630577A (en) 2022-02-11 2022-02-11 AR information display method, vehicle and device under real-time moving live-action

Country Status (1)

Country Link
CN (1) CN116630577A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination