CN110099273B - Augmented reality content display method and device


Info

Publication number
CN110099273B
Authority
CN
China
Prior art keywords
optical axis
augmented reality
reality content
deviation angle
projection optical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910326910.1A
Other languages
Chinese (zh)
Other versions
CN110099273A (en)
Inventor
罗志平
周志鹏
张丙林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Zhilian Beijing Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201910326910.1A
Publication of CN110099273A
Application granted
Publication of CN110099273B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/363 Image reproducers using image projection screens
    • H04N13/366 Image reproducers using viewer tracking
    • H04N13/383 Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
    • H04N13/398 Synchronisation thereof; Control thereof
    • H04N9/00 Details of colour television systems
    • H04N9/12 Picture reproducers
    • H04N9/31 Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3141 Constructional details thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Processing Or Creating Images (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The invention provides an augmented reality content display method and device. The method comprises: determining the augmented reality content to be projected and the deviation angle between the projection optical axis and the driver's viewing angle, the augmented reality content comprising the enhancement content corresponding to each object in the real scene and the projection position of that content; querying an optical axis offset parameter set and a fusion parameter set according to the deviation angle to acquire the optical axis offset parameter and the fusion parameter corresponding to that angle; adjusting the augmented reality content and the projection optical axis with these parameters to obtain the fused augmented reality content and the adjusted projection optical axis; and projecting the fused augmented reality content onto each object in the real scene along the adjusted projection optical axis. The matching accuracy between the augmented reality content and real-scene objects is thereby improved.

Description

Augmented reality content display method and device
Technical Field
The invention relates to the technical field of data processing, and in particular to an augmented reality content display method and device.
Background
A conventional augmented reality head-up display (AR-HUD) projects augmented reality content at a single fixed distance in front of the vehicle. This projection mode is strongly affected by the deviation angle between the driver's viewing angle and the projection optical axis: if the deviation angle is too large, the augmented reality content drifts away from the real-scene object it annotates and may even be matched to the wrong object, for example a pedestrian marker attached to a vehicle, which creates a safety hazard.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, a first object of the present invention is to provide an augmented reality content display method, to address the low matching accuracy between augmented reality content and real-scene objects in the prior art.
A second object of the present invention is to provide an augmented reality content display apparatus.
A third object of the present invention is to propose another augmented reality content display apparatus.
A fourth object of the invention is to propose a non-transitory computer-readable storage medium.
A fifth object of the invention is to propose a computer program product.
To achieve the above object, an embodiment of a first aspect of the present invention provides an augmented reality content display method, including:
determining the augmented reality content to be projected and the deviation angle between the projection optical axis and the driver's viewing angle; the augmented reality content includes the enhancement content corresponding to each object in the real scene and the projection position of the enhancement content;
querying an optical axis offset parameter set and a fusion parameter set according to the deviation angle, and acquiring the optical axis offset parameter and the fusion parameter corresponding to the deviation angle;
adjusting the augmented reality content and the projection optical axis by combining the optical axis offset parameter and the fusion parameter corresponding to the deviation angle, to obtain the fused augmented reality content and the adjusted projection optical axis;
and projecting the fused augmented reality content onto each object in the real scene along the adjusted projection optical axis, to display the augmented reality content.
Further, adjusting the augmented reality content and the projection optical axis by combining the optical axis offset parameter and the fusion parameter corresponding to the deviation angle includes:
adjusting the projection position of the augmented reality content by combining the augmented reality content, the fusion parameter corresponding to the deviation angle, and a preset fusion function, to obtain the fused augmented reality content;
and determining the adjusted projection optical axis by combining the projection optical axis, the optical axis offset parameter corresponding to the deviation angle, and a preset optical axis offset function.
Further, before querying the optical axis offset parameter set and the fusion parameter set according to the deviation angle, the method further includes:
acquiring training data, wherein the training data comprises a training sample for each real scene, and each training sample includes the augmented reality content under the projection optical axis and the augmented reality content under each deviation angle;
and training a neural network model with the training data to obtain the optical axis offset parameter set and the fusion parameter set.
Further, training the neural network model with the training data to obtain the optical axis offset parameter set and the fusion parameter set includes:
for each training sample, inputting the training sample into the neural network model to obtain an output optical axis offset parameter set and an output fusion parameter set;
calculating the augmented reality content under each deviation angle by combining the augmented reality content under the projection optical axis in the training sample with the output optical axis offset parameter set and the output fusion parameter set;
and comparing the calculated augmented reality content under each deviation angle with the augmented reality content under each deviation angle in the training sample, and adjusting the coefficients of the neural network model according to the comparison result until the two are consistent.
Further, the augmented reality content to be projected is determined as follows:
acquiring a live-action picture of the real scene;
identifying the live-action picture, and acquiring each object in the picture and its position;
and acquiring the corresponding enhancement content for each object in the real scene, and determining the projection position of that enhancement content according to the position of the object.
Further, the deviation angle between the projection optical axis and the driver's viewing angle is determined as follows:
detecting the driver's eyes to acquire the driver's viewing angle;
acquiring the angle of the projection optical axis;
and determining the deviation angle between the projection optical axis and the driver's viewing angle from these two angles.
According to the augmented reality content display method of the embodiment of the invention, the augmented reality content to be projected and the deviation angle between the projection optical axis and the driver's viewing angle are determined; the optical axis offset parameter set and the fusion parameter set are queried according to the deviation angle to acquire the optical axis offset parameter and the fusion parameter corresponding to that angle; the augmented reality content and the projection optical axis are adjusted accordingly to obtain the fused augmented reality content and the adjusted projection optical axis; and the fused augmented reality content is projected onto each object in the real scene along the adjusted projection optical axis. Because both the projection position of the augmented reality content and the projection optical axis are adjusted according to the deviation angle between the projection optical axis and the driver's viewing angle, the matching accuracy between the augmented reality content and real-scene objects is improved.
To achieve the above object, an embodiment of a second aspect of the present invention provides an augmented reality content display device, including:
a determining module, configured to determine the augmented reality content to be projected and the deviation angle between the projection optical axis and the driver's viewing angle; the augmented reality content includes the enhancement content corresponding to each object in the real scene and the projection position of the enhancement content;
a query module, configured to query the optical axis offset parameter set and the fusion parameter set according to the deviation angle, and acquire the optical axis offset parameter and the fusion parameter corresponding to the deviation angle;
an adjusting module, configured to adjust the augmented reality content and the projection optical axis by combining the optical axis offset parameter and the fusion parameter corresponding to the deviation angle, to obtain the fused augmented reality content and the adjusted projection optical axis;
and a projection module, configured to project the fused augmented reality content onto each object in the real scene along the adjusted projection optical axis, to display the augmented reality content.
Further, the adjusting module is specifically configured to:
adjust the projection position of the augmented reality content by combining the augmented reality content, the fusion parameter corresponding to the deviation angle, and a preset fusion function, to obtain the fused augmented reality content;
and determine the adjusted projection optical axis by combining the projection optical axis, the optical axis offset parameter corresponding to the deviation angle, and a preset optical axis offset function.
Further, the device also comprises an acquisition module and a training module.
The acquisition module is configured to acquire training data, wherein the training data comprises a training sample for each real scene, and each training sample includes the augmented reality content under the projection optical axis and the augmented reality content under each deviation angle.
The training module is configured to train a neural network model with the training data to obtain the optical axis offset parameter set and the fusion parameter set.
Further, the training module is specifically configured to:
for each training sample, input the training sample into the neural network model to obtain an output optical axis offset parameter set and an output fusion parameter set;
calculate the augmented reality content under each deviation angle by combining the augmented reality content under the projection optical axis in the training sample with the output optical axis offset parameter set and the output fusion parameter set;
and compare the calculated augmented reality content under each deviation angle with the augmented reality content under each deviation angle in the training sample, adjusting the coefficients of the neural network model according to the comparison result until the two are consistent.
Further, the augmented reality content to be projected is determined as follows:
acquiring a live-action picture of the real scene;
identifying the live-action picture, and acquiring each object in the picture and its position;
and acquiring the corresponding enhancement content for each object in the real scene, and determining the projection position of that enhancement content according to the position of the object.
Further, the deviation angle between the projection optical axis and the driver's viewing angle is determined as follows:
detecting the driver's eyes to acquire the driver's viewing angle;
acquiring the angle of the projection optical axis;
and determining the deviation angle between the projection optical axis and the driver's viewing angle from these two angles.
With the augmented reality content display device of the embodiment of the invention, the augmented reality content to be projected and the deviation angle between the projection optical axis and the driver's viewing angle are determined; the optical axis offset parameter set and the fusion parameter set are queried according to the deviation angle to acquire the optical axis offset parameter and the fusion parameter corresponding to that angle; the augmented reality content and the projection optical axis are adjusted accordingly to obtain the fused augmented reality content and the adjusted projection optical axis; and the fused augmented reality content is projected onto each object in the real scene along the adjusted projection optical axis. Because both the projection position of the augmented reality content and the projection optical axis are adjusted according to the deviation angle between the projection optical axis and the driver's viewing angle, the matching accuracy between the augmented reality content and real-scene objects is improved.
To achieve the above object, an embodiment of a third aspect of the present invention provides another augmented reality content display device, including: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the augmented reality content display method described above when executing the program.
To achieve the above object, an embodiment of a fourth aspect of the present invention provides a non-transitory computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the augmented reality content display method described above.
To achieve the above object, an embodiment of a fifth aspect of the present invention provides a computer program product; when the instructions in the computer program product are executed by a processor, the augmented reality content display method described above is implemented.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
fig. 1 is a schematic flowchart of a method for displaying augmented reality content according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an augmented reality content display apparatus according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of another augmented reality content display apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of another augmented reality content display device according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
The augmented reality content display method and apparatus according to the embodiment of the present invention will be described below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a method for displaying augmented reality content according to an embodiment of the present invention. As shown in fig. 1, the augmented reality content display method includes the steps of:
s101, determining augmented reality content to be projected and a deviation angle between a projection optical axis and a visual angle of a driver; augmented reality content includes: the enhanced content corresponding to each object in the real scene and the projection position of the enhanced content.
The execution body of the augmented reality content display method is an augmented reality content display device, which may be an augmented reality head-up display (AR-HUD), software installed in the AR-HUD, or a background server corresponding to the AR-HUD.
In this embodiment, assume the real scene includes a pedestrian, a vehicle ahead, and a road. The enhancement content corresponding to the pedestrian may be, for example, the distance between the host vehicle and the pedestrian; the enhancement content corresponding to the vehicle ahead may be, for example, the distance between the host vehicle and the preceding vehicle, or the speed of the preceding vehicle; the enhancement content corresponding to the road may be, for example, a highlight when the road lies on the host vehicle's navigation route, or an arrow indicating the turning direction. The projection position of the enhancement content is generally on or near the corresponding object: the enhancement content for a pedestrian is projected near the pedestrian, and the enhancement content for a road is projected on the road. Accordingly, the augmented reality content to be projected may be determined as follows (a sketch is given below): acquire a live-action picture of the real scene; identify the picture to acquire each object and its position; and, for each object in the real scene, acquire the corresponding enhancement content and determine its projection position from the position of the object.
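For illustration only, the following minimal sketch shows one way this step could be organized in Python. The detector interface detect_objects and the distance estimator estimate_distance are hypothetical helpers introduced here; the patent does not prescribe a particular detector or content format.

```python
# Illustrative sketch of step S101's content determination; detect_objects and
# estimate_distance are hypothetical helpers, not defined by the patent.
def build_ar_content(frame, detect_objects, estimate_distance):
    """Return (enhancement content, projection position) pairs for one frame."""
    items = []
    for label, center in detect_objects(frame):    # e.g. ("pedestrian", (x, y))
        if label == "pedestrian":
            content = f"pedestrian, {estimate_distance(center):.1f} m"
        elif label == "vehicle":
            content = f"lead vehicle, {estimate_distance(center):.1f} m"
        else:                                      # e.g. a road on the route
            content = "route ahead"
        items.append((content, center))            # project on or near the object
    return items
```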
In this embodiment, the projection optical axis specifically refers to the projection optical axis of the augmented reality head-up display, and the augmented reality content to be projected is the content to be projected under that optical axis. The deviation angle between the projection optical axis and the driver's viewing angle may be determined as follows (see the sketch below): detect the driver's eyes to acquire the driver's viewing angle; acquire the angle of the projection optical axis; and determine the deviation angle from the two angles.
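As a sketch of this computation, assuming the driver's gaze and the projection optical axis are both available as 3-D direction vectors (a representation the patent does not mandate):

```python
import numpy as np

def deviation_angle(gaze_dir, axis_dir):
    """Angle in degrees between the driver's gaze direction and the projection
    optical axis, both given as 3-D direction vectors (an assumed representation)."""
    g = np.asarray(gaze_dir, dtype=float)
    a = np.asarray(axis_dir, dtype=float)
    cos = np.dot(g, a) / (np.linalg.norm(g) * np.linalg.norm(a))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# e.g. an eye-tracker gaze vector against a fixed HUD axis: about 3.1 degrees
# deviation_angle([0.05, -0.02, 1.0], [0.0, 0.0, 1.0])
```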
S102, querying the optical axis offset parameter set and the fusion parameter set according to the deviation angle, and acquiring the optical axis offset parameter and the fusion parameter corresponding to the deviation angle.
In this embodiment, the optical axis offset parameter set contains the optical axis offset parameters corresponding to different deviation angles. An optical axis offset parameter may specifically be the offset of the optical axis along the X and Y axes, and the offset projection optical axis can be determined by combining the actual coordinates of the projection optical axis, the optical axis offset parameter, and the optical axis offset function. The fusion parameter set contains the fusion parameters corresponding to different deviation angles. A fusion parameter may specifically be a transfer matrix between the augmented reality content under the projection optical axis and the augmented reality content under a given deviation angle, and the augmented reality content under different deviation angles can be determined by combining the augmented reality content under the projection optical axis, the fusion parameter, and the fusion function.
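A minimal sketch of the lookup follows. The angle keys, offset values, and identity transfer matrices below are placeholders for illustration; in the patent, the actual values come from the trained neural network model described next.

```python
import numpy as np

# Hypothetical tables: deviation angle (degrees) -> (dx, dy) offset, and
# deviation angle -> 3x3 transfer matrix; real values come from training.
OFFSET_TABLE = {0.0: (0, 0), 5.0: (12, -3), 10.0: (25, -7)}
FUSION_TABLE = {angle: np.eye(3) for angle in OFFSET_TABLE}

def query_parameters(deviation_deg):
    """Return the offset and fusion parameters for the nearest tabulated angle."""
    key = min(OFFSET_TABLE, key=lambda a: abs(a - deviation_deg))
    return OFFSET_TABLE[key], FUSION_TABLE[key]
```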
In this embodiment, before step S102, a process of acquiring the optical axis offset parameter set and the fusion parameter set may be carried out. Specifically, training data are acquired, where the training data comprise a training sample for each real scene, and each training sample includes the augmented reality content under the projection optical axis and the augmented reality content under each deviation angle. A neural network model is then trained with the training data to obtain the optical axis offset parameter set and the fusion parameter set.
Specifically, the neural network model may be trained as follows (see the sketch below): for each training sample, input the sample into the neural network model to obtain an output optical axis offset parameter set and an output fusion parameter set; calculate the augmented reality content under each deviation angle by combining the augmented reality content under the projection optical axis in the sample with the output parameter sets; and compare the calculated augmented reality content under each deviation angle with the content recorded in the sample, adjusting the coefficients of the neural network model according to the comparison result until the two are consistent.
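A schematic version of this loop, assuming a PyTorch-style model, is shown below. The optimizer, the mean-squared-error loss, and the reconstruct helper (which applies the output parameter sets to the base content) are all assumptions; the patent only requires that the model's coefficients be adjusted until the calculated and recorded content agree.

```python
import torch
import torch.nn.functional as F

def train(model, samples, reconstruct, epochs=100, lr=1e-3):
    """samples: pairs of (content under the projection optical axis,
    content under each deviation angle); reconstruct is an assumed helper
    that applies the offset and fusion parameter sets to the base content."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for base_content, content_per_angle in samples:
            offsets, fusions = model(base_content)        # output parameter sets
            recon = reconstruct(base_content, offsets, fusions)
            loss = F.mse_loss(recon, content_per_angle)   # compare with the sample
            opt.zero_grad()
            loss.backward()
            opt.step()                                    # adjust model coefficients
    return model
```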
S103, adjusting the augmented reality content and the projection optical axis by combining the optical axis offset parameter and the fusion parameter corresponding to the deviation angle, to obtain the fused augmented reality content and the adjusted projection optical axis.
In this embodiment, the augmented reality content display device may execute step S103 as follows: adjust the projection position of the augmented reality content by combining the augmented reality content, the fusion parameter corresponding to the deviation angle, and a preset fusion function, to obtain the fused augmented reality content; and determine the adjusted projection optical axis by combining the projection optical axis, the optical axis offset parameter corresponding to the deviation angle, and a preset optical axis offset function.
In this embodiment, the fusion function may be, for example, overlay(I_0, ..., I_n), where I_n denotes the fusion parameter corresponding to the nth deviation angle. After the fusion parameter corresponding to a given deviation angle is selected, the fusion parameters corresponding to the other deviation angles can be set to 0, and the fusion function is then applied to the augmented reality content to obtain the fused augmented reality content.
In this embodiment, the optical axis offset function may be, for example, shift(x, y): after the optical axis offset parameter corresponding to a given deviation angle is selected, its X-axis offset value is taken as x and its Y-axis offset value as y, and the offset function is applied to the projection optical axis to obtain the adjusted projection optical axis. Both functions are sketched below.
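Read literally, the two functions might look as follows. Modeling the fusion parameters as per-angle 3x3 transfer matrices acting on homogeneous projection positions is an assumption made for this sketch; the patent does not spell out the algebra.

```python
import numpy as np

def overlay(positions, fusion_params, selected):
    """Apply only the selected angle's fusion parameter; the others are zeroed."""
    masked = [m if i == selected else np.zeros_like(m)
              for i, m in enumerate(fusion_params)]
    transfer = sum(masked)                       # only the selected matrix survives
    homog = np.hstack([positions, np.ones((len(positions), 1))])
    moved = homog @ transfer.T
    return moved[:, :2] / moved[:, 2:3]          # back to 2-D display coordinates

def shift(axis_xy, dx, dy):
    """Offset the projection optical axis by the tabulated X and Y offset values."""
    return (axis_xy[0] + dx, axis_xy[1] + dy)
```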
S104, projecting the fused augmented reality content onto each object in the real scene along the adjusted projection optical axis, to display the augmented reality content.
In this embodiment, projecting the fused augmented reality content along the adjusted projection optical axis allows the augmented reality content seen from the driver's viewing angle to match the real-scene objects, keeps the deviation between the content and the objects small, avoids mismatches, and thus safeguards driving safety.
According to the augmented reality content display method of the embodiment of the invention, the augmented reality content to be projected and the deviation angle between the projection optical axis and the driver's viewing angle are determined; the optical axis offset parameter set and the fusion parameter set are queried according to the deviation angle to acquire the optical axis offset parameter and the fusion parameter corresponding to that angle; the augmented reality content and the projection optical axis are adjusted accordingly to obtain the fused augmented reality content and the adjusted projection optical axis; and the fused augmented reality content is projected onto each object in the real scene along the adjusted projection optical axis. Because both the projection position of the augmented reality content and the projection optical axis are adjusted according to the deviation angle between the projection optical axis and the driver's viewing angle, the matching accuracy between the augmented reality content and real-scene objects is improved.
Fig. 2 is a schematic structural diagram of an augmented reality content display device according to an embodiment of the present invention. As shown in fig. 2, the device includes: a determination module 21, a query module 22, an adjustment module 23 and a projection module 24.
The determining module 21 is configured to determine the augmented reality content to be projected and the deviation angle between the projection optical axis and the driver's viewing angle; the augmented reality content includes the enhancement content corresponding to each object in the real scene and the projection position of the enhancement content.
The query module 22 is configured to query the optical axis offset parameter set and the fusion parameter set according to the deviation angle, and acquire the optical axis offset parameter and the fusion parameter corresponding to the deviation angle.
The adjusting module 23 is configured to adjust the augmented reality content and the projection optical axis by combining the optical axis offset parameter and the fusion parameter corresponding to the deviation angle, to obtain the fused augmented reality content and the adjusted projection optical axis.
The projection module 24 is configured to project the fused augmented reality content onto each object in the real scene along the adjusted projection optical axis, to display the augmented reality content.
The augmented reality content display device provided by the invention may be an augmented reality head-up display (AR-HUD), software installed in the AR-HUD, a background server corresponding to the AR-HUD, or the like.
In this embodiment, assume the real scene includes a pedestrian, a vehicle ahead, and a road. The enhancement content corresponding to the pedestrian may be, for example, the distance between the host vehicle and the pedestrian; the enhancement content corresponding to the vehicle ahead may be, for example, the distance between the host vehicle and the preceding vehicle, or the speed of the preceding vehicle; the enhancement content corresponding to the road may be, for example, a highlight when the road lies on the host vehicle's navigation route, or an arrow indicating the turning direction. The projection position of the enhancement content is generally on or near the corresponding object. Accordingly, the augmented reality content to be projected may be determined by acquiring a live-action picture of the real scene, identifying the picture to acquire each object and its position, and, for each object, acquiring the corresponding enhancement content and determining its projection position from the position of the object.
In this embodiment, the projection optical axis specifically refers to the projection optical axis of the augmented reality head-up display, and the augmented reality content to be projected is the content to be projected under that optical axis. The deviation angle between the projection optical axis and the driver's viewing angle may be determined by detecting the driver's eyes to acquire the driver's viewing angle, acquiring the angle of the projection optical axis, and determining the deviation angle from the two angles.
In this embodiment, the optical axis offset parameter set contains the optical axis offset parameters corresponding to different deviation angles; an optical axis offset parameter may specifically be the offset of the optical axis along the X and Y axes, and the offset projection optical axis can be determined by combining the actual coordinates of the projection optical axis, the optical axis offset parameter, and the optical axis offset function. The fusion parameter set contains the fusion parameters corresponding to different deviation angles; a fusion parameter may specifically be a transfer matrix between the augmented reality content under the projection optical axis and the augmented reality content under a given deviation angle, and the augmented reality content under different deviation angles can be determined by combining the augmented reality content under the projection optical axis, the fusion parameter, and the fusion function.
Further, the adjusting module 23 is specifically configured to adjust the projection position of the augmented reality content by combining the augmented reality content, the fusion parameter corresponding to the deviation angle, and the preset fusion function, to obtain the fused augmented reality content; and to determine the adjusted projection optical axis by combining the projection optical axis, the optical axis offset parameter corresponding to the deviation angle, and the preset optical axis offset function.
In this embodiment, the fusion function may be, for example, overlay(I_0, ..., I_n), where I_n denotes the fusion parameter corresponding to the nth deviation angle. After the fusion parameter corresponding to a given deviation angle is selected, the fusion parameters corresponding to the other deviation angles can be set to 0, and the fusion function is then applied to the augmented reality content to obtain the fused augmented reality content.
In this embodiment, the optical axis offset function may be, for example, shift(x, y): after the optical axis offset parameter corresponding to a given deviation angle is selected, its X-axis offset value is taken as x and its Y-axis offset value as y, and the offset function is applied to the projection optical axis to obtain the adjusted projection optical axis.
Further, referring to fig. 3, on the basis of the embodiment shown in fig. 2, the device may further include an acquisition module 25 and a training module 26.
the obtaining module 25 is configured to obtain training data, where the training data includes: training samples corresponding to all the live-action scenes; the training sample includes: projecting augmented reality contents under an optical axis and the augmented reality contents under various deviation angles;
the training module 26 is configured to train a neural network model by using the training data to obtain an optical axis offset parameter set and a fusion parameter set.
The training module 26 may be specifically configured to, for each training sample, input the training sample into the neural network model to obtain an output optical axis offset parameter set and an output fusion parameter set; calculating to obtain augmented reality contents under each deviation angle by combining the augmented reality contents under the projection optical axis in the training sample, the output optical axis offset parameter group and the output fusion parameter group; and comparing the augmented reality content under each deviation angle obtained by calculation with the augmented reality content under each deviation angle in the training sample, and adjusting the coefficient of the neural network model according to the comparison result until the augmented reality content under each deviation angle obtained by calculation is consistent with the augmented reality content under each deviation angle in the training sample.
With the augmented reality content display device of the embodiment of the invention, the augmented reality content to be projected and the deviation angle between the projection optical axis and the driver's viewing angle are determined; the optical axis offset parameter set and the fusion parameter set are queried according to the deviation angle to acquire the optical axis offset parameter and the fusion parameter corresponding to that angle; the augmented reality content and the projection optical axis are adjusted accordingly to obtain the fused augmented reality content and the adjusted projection optical axis; and the fused augmented reality content is projected onto each object in the real scene along the adjusted projection optical axis. Because both the projection position of the augmented reality content and the projection optical axis are adjusted according to the deviation angle between the projection optical axis and the driver's viewing angle, the matching accuracy between the augmented reality content and real-scene objects is improved.
Fig. 4 is a schematic structural diagram of another augmented reality content display device according to an embodiment of the present invention. The augmented reality content display device includes:
a memory 1001, a processor 1002, and a computer program stored on the memory 1001 and executable on the processor 1002.
The processor 1002, when executing the program, implements the augmented reality content display method provided in the above-described embodiment.
The augmented reality content display device further includes:
a communication interface 1003 for communicating between the memory 1001 and the processor 1002.
The memory 1001 is configured to store computer programs that can be run on the processor 1002.
Memory 1001 may include high-speed RAM memory and may also include non-volatile memory (e.g., at least one disk memory).
The processor 1002 is configured to implement the augmented reality content display method according to the foregoing embodiment when executing the program.
If the memory 1001, the processor 1002, and the communication interface 1003 are implemented independently, the communication interface 1003, the memory 1001, and the processor 1002 may be connected to each other through a bus and perform communication with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 4, but this does not indicate only one bus or one type of bus.
Optionally, in a specific implementation, if the memory 1001, the processor 1002, and the communication interface 1003 are integrated on one chip, the memory 1001, the processor 1002, and the communication interface 1003 may complete communication with each other through an internal interface.
The processor 1002 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more Integrated circuits configured to implement embodiments of the present invention.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements an augmented reality content display method as described above.
The present invention also provides a computer program product; when the instructions in the computer program product are executed by a processor, the augmented reality content display method described above is implemented.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two or three, unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (14)

1. An augmented reality content display method, comprising:
determining augmented reality content to be projected and a deviation angle between a projection optical axis and a visual angle of a driver; the augmented reality content includes: enhancing contents corresponding to each object in the real scene and the projection position of the enhancing contents;
inquiring an optical axis offset parameter group and a fusion parameter group according to the deviation angle, and acquiring an optical axis offset parameter and a fusion parameter corresponding to the deviation angle; the optical axis offset parameter group and the fusion parameter group are output of a neural network model, and the neural network model is obtained by training samples corresponding to all scenes;
adjusting the augmented reality content and the projection optical axis by combining the optical axis offset parameter and the fusion parameter corresponding to the deviation angle to obtain the fused augmented reality content and the adjusted projection optical axis;
and projecting the fused augmented reality content to each object in the real scene by adopting the adjusted projection optical axis to realize the display of the augmented reality content.
2. The method according to claim 1, wherein the adjusting the augmented reality content and the projection optical axis by combining an optical axis offset parameter and a fusion parameter corresponding to the deviation angle to obtain a fused augmented reality content and an adjusted projection optical axis comprises:
adjusting the projection position of the augmented reality content by combining the augmented reality content, the fusion parameter corresponding to the deviation angle and a preset fusion function to obtain the fused augmented reality content;
and determining the adjusted projection optical axis by combining the projection optical axis, the optical axis offset parameter corresponding to the deviation angle and a preset optical axis offset function.
3. The method according to claim 1, wherein before querying the optical axis offset parameter set and the fusion parameter set according to the deviation angle and obtaining the optical axis offset parameter and the fusion parameter corresponding to the deviation angle, the method further comprises:
acquiring training data, wherein the training data comprises: training samples corresponding to all the live-action scenes; the training sample includes: projecting augmented reality contents under an optical axis and the augmented reality contents under various deviation angles;
and training a neural network model by adopting the training data to obtain an optical axis offset parameter set and a fusion parameter set.
4. The method of claim 3, wherein the training a neural network model using the training data to obtain the set of optical axis offset parameters and the set of fusion parameters comprises:
inputting the training samples into a neural network model aiming at each training sample to obtain an output optical axis offset parameter set and a fusion parameter set;
calculating to obtain augmented reality contents under each deviation angle by combining the augmented reality contents under the projection optical axis in the training sample, the output optical axis offset parameter group and the output fusion parameter group;
and comparing the augmented reality content under each deviation angle obtained by calculation with the augmented reality content under each deviation angle in the training sample, and adjusting the coefficient of the neural network model according to the comparison result until the augmented reality content under each deviation angle obtained by calculation is consistent with the augmented reality content under each deviation angle in the training sample.
5. The method of claim 1, wherein the augmented reality content to be projected is determined in a manner,
acquiring a live-action picture corresponding to a live-action;
identifying the live-action picture, and acquiring each object and corresponding position in the live-action picture;
and acquiring corresponding enhanced content for each object in the real scene, and determining the projection position of the corresponding enhanced content according to the position of the object.
6. A method according to claim 1, characterized in that the deviation angle between the projection optical axis and the driver's view angle is determined in such a way that,
detecting eyes of a driver to acquire a visual angle of the driver;
acquiring the angle of a projection optical axis;
and determining a deviation angle between the projection optical axis and the driver visual angle according to the angle of the projection optical axis and the driver visual angle.
7. An augmented reality content display device, comprising:
the determining module is used for determining augmented reality content to be projected and a deviation angle between a projection optical axis and a visual angle of a driver; the augmented reality content includes: enhancing contents corresponding to each object in the real scene and the projection position of the enhancing contents;
the query module is used for querying the optical axis offset parameter group and the fusion parameter group according to the deviation angle, and acquiring the optical axis offset parameter and the fusion parameter corresponding to the deviation angle; the optical axis offset parameter group and the fusion parameter group are output of a neural network model, and the neural network model is obtained by training samples corresponding to all scenes;
the adjusting module is used for adjusting the augmented reality content and the projection optical axis by combining the optical axis offset parameter and the fusion parameter corresponding to the deviation angle to obtain fused augmented reality content and an adjusted projection optical axis;
and the projection module is used for projecting the fused augmented reality contents to each object in the real scene by adopting the adjusted projection optical axis so as to realize the display of the augmented reality contents.
8. The apparatus of claim 7, wherein the adjustment module is specifically configured to,
adjusting the projection position of the augmented reality content by combining the augmented reality content, the fusion parameter corresponding to the deviation angle and a preset fusion function to obtain the fused augmented reality content;
and determining the adjusted projection optical axis by combining the projection optical axis, the optical axis offset parameter corresponding to the deviation angle and a preset optical axis offset function.
9. The apparatus of claim 7, further comprising: an acquisition module and a training module;
the obtaining module is configured to obtain training data, where the training data includes: training samples corresponding to all the live-action scenes; the training sample includes: projecting augmented reality contents under an optical axis and the augmented reality contents under various deviation angles;
and the training module is used for training a neural network model by adopting the training data to obtain an optical axis offset parameter set and a fusion parameter set.
10. The apparatus of claim 9, wherein the training module is specifically configured to:
for each training sample, input the training sample into the neural network model to obtain an output optical axis offset parameter group and an output fusion parameter group;
compute the augmented reality content under each deviation angle from the augmented reality content under the projection optical axis in the training sample, combined with the output optical axis offset parameter group and the output fusion parameter group;
and compare the computed augmented reality content under each deviation angle with the augmented reality content under each deviation angle in the training sample, adjusting the coefficients of the neural network model according to the comparison result until the computed content is consistent with the content in the training sample.
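Claim 10's loop is a standard supervised regression: the network maps content under the projection optical axis to per-angle parameter pairs, the preset functions reconstruct content at each deviation angle, and the reconstruction error drives the coefficient updates. A sketch under assumed shapes; the flattened content representation, the affine stand-in for the preset functions, and the random training data are all illustrative.

```python
import torch
import torch.nn as nn

N_ANGLES = 16      # number of discretized deviation angles (assumed)
CONTENT_DIM = 8    # flattened AR-content vector per sample (assumed)

# Maps content under the projection optical axis to one (offset, fusion)
# parameter pair per deviation angle.
model = nn.Sequential(nn.Linear(CONTENT_DIM, 64), nn.ReLU(),
                      nn.Linear(64, N_ANGLES * 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def reconstruct(content, offsets, fusions):
    # Stand-in for the patent's preset functions: per-angle affine warp.
    return fusions.unsqueeze(-1) * content.unsqueeze(1) + offsets.unsqueeze(-1)

# Random stand-ins for the claim-9 training samples:
# (content under the projection axis, content under each deviation angle).
loader = [(torch.randn(4, CONTENT_DIM), torch.randn(4, N_ANGLES, CONTENT_DIM))
          for _ in range(10)]

for content, target in loader:
    params = model(content).view(-1, N_ANGLES, 2)
    offsets, fusions = params[..., 0], params[..., 1]
    loss = nn.functional.mse_loss(reconstruct(content, offsets, fusions), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```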
11. The apparatus of claim 7, wherein the augmented reality content to be projected is determined by:
acquiring a real-scene picture corresponding to the real scene;
recognizing the real-scene picture to acquire each object in the picture and its position;
and, for each object in the real scene, acquiring the corresponding enhanced content and determining the projection position of the enhanced content according to the position of the object.
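Claim 11's determination step pairs each recognized object with registered enhanced content at the object's position. A minimal sketch; the detector stub and the content library are hypothetical placeholders for the recognition model and content store the patent leaves unspecified.

```python
from typing import Dict, List, Tuple

def detect_objects(picture) -> List[Tuple[str, Tuple[int, int]]]:
    """Stub for scene recognition: returns (label, (x, y)) pairs.
    A real system would run a detector here; fixed output keeps the
    sketch self-contained."""
    return [("vehicle", (320, 240)), ("lane_marking", (300, 400))]

# Enhanced content registered per object class (illustrative).
CONTENT_LIBRARY: Dict[str, str] = {
    "vehicle": "distance/speed overlay",
    "lane_marking": "lane-guidance arrow",
}

def determine_ar_content(picture):
    """Pair each detected object with its enhanced content, placed at
    the object's position in the picture."""
    return [(CONTENT_LIBRARY[label], pos)
            for label, pos in detect_objects(picture)
            if label in CONTENT_LIBRARY]

print(determine_ar_content(picture=None))
```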
12. The apparatus of claim 7, wherein the deviation angle between the projection optical axis and the driver's viewing angle is determined by:
detecting the driver's eyes to acquire the driver's viewing angle;
acquiring the angle of the projection optical axis;
and determining the deviation angle between the projection optical axis and the driver's viewing angle according to the two acquired angles.
13. An augmented reality content display device, comprising:
a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the augmented reality content display method according to any one of claims 1 to 6.
14. A non-transitory computer-readable storage medium having stored thereon a computer program, wherein the program, when executed by a processor, implements the augmented reality content display method of any one of claims 1 to 6.
CN201910326910.1A 2019-04-23 2019-04-23 Augmented reality content display method and device Active CN110099273B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910326910.1A CN110099273B (en) 2019-04-23 2019-04-23 Augmented reality content display method and device

Publications (2)

Publication Number Publication Date
CN110099273A (en) 2019-08-06
CN110099273B (en) 2021-07-30

Family

ID: 67445561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910326910.1A Active CN110099273B (en) 2019-04-23 2019-04-23 Augmented reality content display method and device

Country Status (1)

Country Link
CN (1) CN110099273B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114820504B (en) * 2022-04-22 2023-03-21 江苏泽景汽车电子股份有限公司 Method and device for detecting image fusion deviation, electronic equipment and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8994558B2 (en) * 2012-02-01 2015-03-31 Electronics And Telecommunications Research Institute Automotive augmented reality head-up display apparatus and method
US10198865B2 (en) * 2014-07-10 2019-02-05 Seiko Epson Corporation HMD calibration with direct geometric modeling
KR101713740B1 (en) * 2014-12-08 2017-03-08 현대자동차주식회사 Method and device for displaying augmented reality HUD for vehicle
DE102015216127A1 (en) * 2015-08-24 2017-03-02 Ford Global Technologies, Llc Method for eye tracking in a vehicle with head-up display
JP2017207607A (en) * 2016-05-18 2017-11-24 アルパイン株式会社 Multi layer image display device
CN107027015A (en) * 2017-04-28 2017-08-08 广景视睿科技(深圳)有限公司 3D trends optical projection system based on augmented reality and the projecting method for the system
CN109649275B (en) * 2018-11-29 2020-03-20 福瑞泰克智能系统有限公司 Driving assistance system and method based on augmented reality

Also Published As

Publication number Publication date
CN110099273A (en) 2019-08-06

Similar Documents

Publication Publication Date Title
US11181737B2 (en) Head-up display device for displaying display items having movement attribute or fixed attribute, display control method, and control program
JP4919036B2 (en) Moving object recognition device
CN108025644A (en) Display apparatus and vehicle display methods
US11126875B2 (en) Method and device of multi-focal sensing of an obstacle and non-volatile computer-readable storage medium
US8947533B2 (en) Parameter determining device, parameter determining system, parameter determining method, and recording medium
US20100165105A1 (en) Vehicle-installed image processing apparatus and eye point conversion information generation method
US20200158840A1 (en) Multi-mode multi-sensor calibration
US11157753B2 (en) Road line detection device and road line detection method
US11473921B2 (en) Method of following a vehicle
CN103502876A (en) Method and device for calibrating a projection device of a vehicle
WO2015186294A1 (en) Vehicle-mounted image-processing device
CN110084230B (en) Image-based vehicle body direction detection method and device
JP2019121876A (en) Image processing device, display device, navigation system, image processing method, and program
CN110070623A (en) Guide line draws reminding method, device, computer equipment and storage medium
CN110099273B (en) Augmented reality content display method and device
CN110962858A (en) Target identification method and device
WO2020187978A1 (en) Image processing system and method
KR102071720B1 (en) Method for matching radar target list and target of vision image
CN116071714A (en) Lane departure detection method, system, electronic device, and readable storage medium
CN110895675A (en) Method for determining coordinates of feature points of an object in 3D space
CN117315048B (en) External parameter self-calibration method of vehicle-mounted camera, electronic equipment and storage medium
KR20180026418A (en) Apparatus for matching coordinate of head-up display
US20220001889A1 (en) Method for assisting a user of an assistance system, assistance system and vehicle comprising such a system
CN112485807B (en) Object recognition device
US11993208B2 (en) Image processing system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211013

Address after: 100176 101, floor 1, building 1, yard 7, Ruihe West 2nd Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing

Patentee after: Apollo Zhilian (Beijing) Technology Co.,Ltd.

Address before: 100085 Baidu Building, 10 Shangdi Tenth Street, Haidian District, Beijing

Patentee before: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) Co.,Ltd.