CN114454814B - Human-computer interaction method and device for augmented reality - Google Patents

Human-computer interaction method and device for augmented reality

Info

Publication number
CN114454814B
Authority
CN
China
Prior art keywords
vehicle
image
virtual object
voice
driver
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210091709.1A
Other languages
Chinese (zh)
Other versions
CN114454814A (en)
Inventor
李选正
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Space Digital Technology Co ltd
Original Assignee
Shenzhen Space Digital Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Space Digital Technology Co ltd filed Critical Shenzhen Space Digital Technology Co ltd
Priority to CN202210091709.1A priority Critical patent/CN114454814B/en
Publication of CN114454814A publication Critical patent/CN114454814A/en
Application granted granted Critical
Publication of CN114454814B publication Critical patent/CN114454814B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R16/00: Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
    • B60R16/02: Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815: Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Mechanical Engineering (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a human-computer interaction method and device for augmented reality, comprising the following steps: a camera with a scale is arranged in front of a vehicle and acquires images of the area in front of the vehicle; each actual object in the image is extracted and its parameters are determined; parameters of the virtual object are determined; the actual object is matched with the virtual object, a scaling ratio of the virtual object is determined, and a virtual object graph is determined according to the scaling ratio; the virtual object graph is displayed through the augmented reality device; and the driver performs human-computer interaction with the device by voice or gesture.

Description

Human-computer interaction method and device for augmented reality
Technical Field
The invention relates to the technical field of man-machine interaction, in particular to a man-machine interaction method and equipment for augmented reality.
Background
The world around us is rich in information. An object of interest carries many pieces of digital information, and that information can be accessed in different ways, for example by querying the name of the object or its description through a search engine. With the popularity of mobile devices, mobile devices now offer various functions for accessing object information; among these is the augmented reality (AR) function. AR is a technique for augmenting a real scene with virtual information. Based on the real physical environment captured by an acquisition device such as a camera, AR superimposes virtually generated information, such as text, two-dimensional graphics and three-dimensional models, onto objects in the real physical environment shown on a display screen, thereby annotating and explaining the physical environment at the user's position, or enhancing and emphasizing certain effects of the real environment.
In the prior art, augmented reality technology is used in a vehicle-mounted system to project an image onto a panel facing the driver through a head-up display device, so the driver no longer needs to lower the head to look at the instrument panel. However, the technical problem in the prior art is that the image projected onto the panel cannot be matched with the actual scene, so the information obtained by the driver becomes confusing.
Disclosure of Invention
The invention provides a human-computer interaction method and equipment for augmented reality, which are used for solving the problems in the prior art.
The invention provides a human-computer interaction method for augmented reality, which comprises the following steps:
s100, a camera with a scale is arranged in front of a vehicle, and the camera with the scale acquires images in front of the vehicle;
s200, extracting each actual object in the image, and determining parameters of each actual object;
s300, determining parameters of the virtual object; matching an actual object with a virtual object, determining a scaling ratio of the virtual object, and determining a virtual object graph according to the scaling ratio;
s400, displaying the virtual object graph through the augmented reality equipment;
s500, the driver performs man-machine interaction with the virtual reality device in a voice or gesture mode.
Preferably, the step S100 further includes:
s600, setting a common camera behind the vehicle, wherein the common camera collects rear view images behind the vehicle;
s700, carrying out panoramic image fusion on the rear-view image and the image in front of the vehicle acquired by the camera with a scale to form a panoramic image.
Preferably, the S300 includes:
forming the virtual object graph on the vehicle-mounted control system;
further included after S300 is:
s301, a vehicle-mounted display screen is arranged on the vehicle and is connected with a vehicle-mounted control system;
s302, the vehicle-mounted control system fuses the virtual object graph and the image acquired by the camera with the scale to form a virtual reality image;
s303, transmitting the virtual reality image to the vehicle-mounted display screen;
s304, displaying the virtual reality image on the vehicle-mounted display screen.
S305, the driver performs man-machine interaction with the vehicle-mounted display screen in a voice or gesture mode.
Preferably, the S500 includes:
s501, a gesture camera is arranged at a position facing a driver, and action information of the driver is collected;
s502, in the human-computer interaction process, an operation gesture or a voice prompt is displayed on a display end of the augmented reality device;
S503, the driver performs actions or inputs voice instructions according to the displayed operation gestures or voice prompts;
s504, if the action input by the driver is different from the operation gesture displayed on the display end, displaying an action non-standard reminding mark on the display end, and reminding the driver of correctly inputting the action through voice.
Preferably, the step S502 includes:
s5021, setting a database in the augmented reality equipment, wherein the database comprises an operation gesture or voice prompt corresponding to each virtual object, an icon of the operation gesture or voice prompt, a position of the operation gesture or voice prompt relative to the virtual object, and an irregular reminding mark corresponding to the icon of the operation gesture or voice prompt when wrong input occurs;
s5022, displaying the virtual object and an icon of an operation gesture or voice prompt corresponding to the virtual object at a preset position on a display end of the augmented reality device;
accordingly, the S504 includes:
s5041, displaying an action non-standard reminding mark on the display end;
s5042, presetting reminding display time and icon display time;
s5043, alternately displaying the icon of the operation gesture or voice prompt and the action non-standard reminding mark according to the reminding display time and the icon display time.
The invention provides a human-computer interaction device for augmented reality, comprising:
the acquisition device is used for arranging a camera with a staff gauge in front of the vehicle, and the camera with the staff gauge acquires images in front of the vehicle;
the parameter determining device is used for extracting each actual object in the image and determining the parameter of each actual object;
virtual object graph determining means for determining parameters of the virtual object; matching an actual object with a virtual object, determining a scaling ratio of the virtual object, and determining a virtual object graph according to the scaling ratio;
a display device for displaying the virtual object graph through the augmented reality equipment;
and the man-machine interaction device is used for carrying out man-machine interaction with the virtual reality equipment by a driver in a voice or gesture mode.
Preferably, the method further comprises:
the rear view image acquisition device is used for arranging a common camera behind the vehicle and acquiring rear view images behind the vehicle;
and the panoramic image forming device is used for carrying out panoramic image fusion on the rearview image and the image before the vehicle is acquired by the camera with the scale to form a panoramic image.
Preferably, the virtual object graphic determining apparatus includes:
The vehicle-mounted control system is used for forming the virtual object graph;
further comprises:
the vehicle-mounted display screen device is used for arranging a vehicle-mounted display screen on the vehicle, and the vehicle-mounted display screen is connected with a vehicle-mounted control system;
the virtual reality image forming device is used for fusing the virtual object graph with the image acquired by the camera with the scale by the vehicle-mounted control system to form a virtual reality image;
the transmission device is used for transmitting the virtual reality image to the vehicle-mounted display screen;
and the vehicle-mounted display screen display device is used for displaying the virtual reality image on the vehicle-mounted display screen.
and the interaction device is used for the driver to perform man-machine interaction with the vehicle-mounted display screen in a voice or gesture mode.
Preferably, the man-machine interaction device comprises:
the action information acquisition device is used for arranging a gesture camera at a position opposite to the driver and acquiring action information of the driver;
the operation gesture or voice prompt display device is used for displaying an operation gesture or voice prompt on the display end of the augmented reality equipment in the human-computer interaction process;
the input device is used for the driver to act or input voice instructions according to the displayed operation gestures or voice prompts;
And the reminding device is used for displaying an action nonstandard reminding mark on the display end if the action input by the driver is different from the operation gesture displayed on the display end, and reminding the driver to correctly input the action through voice.
Preferably, the operation gesture or voice prompt display device includes:
a database setting device, configured to set a database in the augmented reality device, where the database includes an operation gesture or a voice prompt corresponding to each virtual object, an icon of the operation gesture or the voice prompt, a position of the operation gesture or the voice prompt relative to the virtual object, and an irregular reminding mark corresponding to the icon of the operation gesture or the voice prompt when an error input occurs;
the icon display device is used for displaying the virtual object and icons of operation gestures or voice prompts corresponding to the virtual object at a preset position on the display end of the augmented reality device;
correspondingly, the reminding device comprises:
the mark display device is used for displaying the action non-standard reminding mark on the display end;
the time setting device is used for presetting reminding display time and icon display time;
And the alternate display device is used for alternately displaying the icon of the operation gesture or the voice prompt and the action nonstandard reminding mark through the reminding display time and the icon display time.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a flowchart of a human-computer interaction method for augmented reality in an embodiment of the present invention;
FIG. 2 is a flow chart of another method of human-machine interaction for augmented reality in an embodiment of the invention;
fig. 3 is a schematic structural diagram of a man-machine interaction device for augmented reality according to an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings, it being understood that the preferred embodiments described herein are for illustration and explanation of the present invention only, and are not intended to limit the present invention.
The embodiment of the invention provides a human-computer interaction method for augmented reality, referring to fig. 1 and 2, comprising the following steps:
s100, a camera with a scale is arranged in front of a vehicle, and the camera with the scale acquires images in front of the vehicle;
s200, extracting each actual object in the image, and determining parameters of each actual object;
s300, determining parameters of the virtual object; matching an actual object with a virtual object, determining a scaling ratio of the virtual object, and determining a virtual object graph according to the scaling ratio;
s400, displaying the virtual object graph through the augmented reality equipment;
s500, the driver performs man-machine interaction with the virtual reality device in a voice or gesture mode.
The working principle of the technical scheme is as follows: the scheme adopted by the embodiment is that a camera with a staff gauge is arranged in front of a vehicle, and the camera with the staff gauge acquires images in front of the vehicle; extracting each actual object in the image, and determining parameters of each actual object; determining parameters of the virtual object; matching an actual object with a virtual object, determining a scaling ratio of the virtual object, and determining a virtual object graph according to the scaling ratio; displaying the virtual object graph through augmented reality equipment; the driver performs man-machine interaction with the virtual reality device in a voice or gesture mode.
The beneficial effects of the technical scheme are as follows: the scheme provided by the embodiment is adopted, a camera with a staff gauge is arranged in front of a vehicle, and the camera with the staff gauge collects images in front of the vehicle; extracting each actual object in the image, and determining parameters of each actual object; determining parameters of the virtual object; matching an actual object with a virtual object, determining a scaling ratio of the virtual object, and determining a virtual object graph according to the scaling ratio; displaying the virtual object graph through augmented reality equipment; the driver performs man-machine interaction with the virtual reality device in a voice or gesture mode.
Images are collected by the camera with a scale, the collected images are matched with the virtual object parameters, the scaling ratio of the virtual object is determined from the matching result, and the virtual object graph is then determined according to the scaling ratio. In this way the virtual object is fused into the real scene to the greatest possible extent, and the situation is avoided in which the scene looks unreal because the virtual object is too small or too large relative to the real scene.
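As an illustration only, and not part of the patent text, the sketch below shows one way the scaling ratio described above could be computed, assuming the camera with a scale yields a real-world width for each detected actual object and each virtual asset has a nominal modelled width; the names ActualObject, VirtualObject and real_width_m are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ActualObject:
    label: str           # class of the detected real object, e.g. "cone"
    real_width_m: float  # physical width estimated via the camera's scale

@dataclass
class VirtualObject:
    label: str
    nominal_width_m: float  # width the 3D asset was modelled at

def scaling_ratio(actual: ActualObject, virtual: VirtualObject) -> float:
    """Ratio that makes the virtual object's size consistent with the
    matched actual object; the two must refer to the same object class."""
    if actual.label != virtual.label:
        raise ValueError("actual and virtual objects do not match")
    return actual.real_width_m / virtual.nominal_width_m

# Example: a traffic-cone asset modelled at 0.5 m, real cone measured at 0.4 m
ratio = scaling_ratio(ActualObject("cone", 0.4), VirtualObject("cone", 0.5))
print(f"scaling ratio: {ratio:.2f}")  # 0.80
```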
In addition, to address the problem that distortion or deformation may occur when the virtual object is scaled, the following calculation is performed on the images of the virtual object before and after scaling to ensure that no distortion occurs in the scaling process; the specific operation is as follows:
the quality of scaling is determined by determining the displacement difference of all pixel points before and after scaling, and specifically can be determined by an energy difference minimum function:
D(A) = \sum_{i=1}^{n} \omega \left\| \nabla E_a(i) - \nabla E_b\left[ i + (x, y)^T \right] \right\|^2

wherein D(A) is the energy difference minimum function before and after scaling, A is the displacement vector, E_b is the scaled image, E_a is the image before scaling, i is a pixel point on the image, E_a(i) is the i-th pixel point on the image before scaling, n is the number of pixel points (i = 1, 2, 3 ... n), ∇ denotes the gradient calculation, (x, y)^T = A is the displacement vector with x the horizontal displacement and y the vertical displacement, E_b[i + (x, y)^T] is the i-th pixel point of the scaled image after being shifted by the displacement vector A, and ω represents the weight.
The similarity of all the pixel points before and after scaling is determined according to the energy difference minimum function; the average value over all the pixel points in the image is determined from the value of the energy difference minimum function of each pixel point, and the deviation of each pixel point from this average is judged. If the deviation of any pixel point exceeds a threshold value, the quality of the scaled image is considered problematic and a new scaled image is formed through re-scaling.
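Outside the patent text, a minimal sketch of this check is given below. It assumes grayscale NumPy arrays of equal size, approximates the per-pixel energy-difference term with gradients from numpy.gradient, applies the displacement with a simple roll, and uses an illustrative threshold for the mean-deviation test.

```python
import numpy as np

def energy_difference(pre: np.ndarray, post: np.ndarray, disp=(0, 0)) -> np.ndarray:
    """Per-pixel squared gradient difference between the pre-scaling image
    `pre` and the scaled image `post` shifted by displacement vector `disp` = (x, y)."""
    gy_a, gx_a = np.gradient(pre.astype(float))
    shifted = np.roll(post.astype(float), shift=(disp[1], disp[0]), axis=(0, 1))
    gy_b, gx_b = np.gradient(shifted)
    return (gx_a - gx_b) ** 2 + (gy_a - gy_b) ** 2

def scaling_ok(pre: np.ndarray, post: np.ndarray, disp=(0, 0), threshold=4.0) -> bool:
    """Flag the scaled image as problematic when any pixel deviates from the
    mean energy difference by more than `threshold` (an illustrative value)."""
    d = energy_difference(pre, post, disp)
    deviation = np.abs(d - d.mean())
    return bool(np.all(deviation <= threshold))
```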
In another embodiment, the step S100 further includes:
s600, setting a common camera behind the vehicle, wherein the common camera collects rear view images behind the vehicle;
s700, carrying out panoramic image fusion on the rear-view image and the image in front of the vehicle acquired by the camera with a scale to form a panoramic image.
The working principle of the technical scheme is as follows: the scheme that this embodiment adopted is set up in front of the vehicle and take the scale camera, take the scale camera to gather the image before the vehicle after still including:
a common camera is arranged behind the vehicle and collects a rear-view image of the area behind the vehicle; and panoramic image fusion is carried out on the rear-view image and the image in front of the vehicle acquired by the camera with a scale to form a panoramic image.
The beneficial effects of the technical scheme are as follows: the scheme that adopts this embodiment to provide set up the area scale camera before the vehicle, still include after taking the image before the scale camera gathers the vehicle: a common camera is arranged behind the vehicle and is used for collecting rear view images behind the vehicle; and carrying out panoramic image fusion on the rearview image and an image before the vehicle is acquired by the camera with the staff gauge to form a panoramic image.
Forming the panoramic image makes it more convenient for the driver to check the situation around the vehicle. The panoramic image is displayed through the augmented reality device, a virtual object is fused into the panoramic image, and the driver can perform human-computer interaction with the device by voice or gesture according to the information in the panoramic image.
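A minimal sketch, assuming OpenCV and NumPy are available, of how the front frame and the rear-view frame might be fused for display. It simply mirrors the rear view and places it beside the front view, so it approximates the fusion step rather than reproducing the patent's method.

```python
import cv2
import numpy as np

def fuse_panorama(front: np.ndarray, rear: np.ndarray) -> np.ndarray:
    """Place the mirrored rear-view frame next to the front (scale-camera)
    frame so both can be shown in one panoramic image."""
    rear = cv2.flip(rear, 1)  # mirror, so the rear view reads like a mirror image
    h = min(front.shape[0], rear.shape[0])
    front = cv2.resize(front, (front.shape[1] * h // front.shape[0], h))
    rear = cv2.resize(rear, (rear.shape[1] * h // rear.shape[0], h))
    return np.hstack([front, rear])  # side-by-side panoramic image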
In another embodiment, the S300 includes:
forming the virtual object graph on the vehicle-mounted control system;
further included after S300 is:
s301, a vehicle-mounted display screen is arranged on the vehicle and is connected with a vehicle-mounted control system;
s302, the vehicle-mounted control system fuses the virtual object graph and the image acquired by the camera with the scale to form a virtual reality image;
s303, transmitting the virtual reality image to the vehicle-mounted display screen;
s304, displaying the virtual reality image on the vehicle-mounted display screen.
S305, the driver performs man-machine interaction with the vehicle-mounted display screen in a voice or gesture mode.
The working principle of the technical scheme is as follows: the scheme adopted by the embodiment is that the virtual object graph is formed in the vehicle-mounted control system;
In addition, a vehicle-mounted display screen is arranged on the vehicle and is connected with a vehicle-mounted control system; the vehicle-mounted control system fuses the virtual object graph with the image acquired by the camera with the scale to form a virtual reality image; transmitting the virtual reality image to the vehicle-mounted display screen; and displaying the virtual reality image on the vehicle-mounted display screen. And the driver performs man-machine interaction with the vehicle-mounted display screen in a voice or gesture mode.
The beneficial effects of the technical scheme are as follows: the solution provided by the present embodiment is that the virtual object graph is formed in the vehicle-mounted control system; in addition, a vehicle-mounted display screen is arranged on the vehicle and is connected with a vehicle-mounted control system; the vehicle-mounted control system fuses the virtual object graph with the image acquired by the camera with the scale to form a virtual reality image; transmitting the virtual reality image to the vehicle-mounted display screen; and displaying the virtual reality image on the vehicle-mounted display screen. And the driver performs man-machine interaction with the vehicle-mounted display screen in a voice or gesture mode.
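As a hedged illustration of the fusion step S302, the sketch below composites a rendered virtual object layer over the camera frame with a per-pixel alpha mask; the patent does not specify the blending method, so this is only one plausible implementation and the array layout is assumed.

```python
import numpy as np

def compose_ar_frame(camera_frame: np.ndarray, virtual_layer: np.ndarray,
                     alpha: np.ndarray) -> np.ndarray:
    """Blend the rendered virtual object graph over the scale-camera frame.
    `alpha` is a per-pixel opacity mask in [0, 1] for the virtual layer."""
    a = alpha[..., None]  # broadcast the mask over the colour channels
    fused = a * virtual_layer.astype(float) + (1.0 - a) * camera_frame.astype(float)
    return fused.astype(np.uint8)  # the virtual reality image sent to the display screen
```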
In another embodiment, the S500 includes:
s501, a gesture camera is arranged at a position facing a driver, and action information of the driver is collected;
s502, in the human-computer interaction process, an operation gesture or a voice prompt is displayed on a display end of the augmented reality device;
s503, the driver performs actions or inputs voice instructions according to the displayed operation gestures or voice prompts;
s504, if the action input by the driver is different from the operation gesture displayed on the display end, displaying an action non-standard reminding mark on the display end, and reminding the driver of correctly inputting the action through voice.
The working principle of the technical scheme is as follows: the scheme adopted by the embodiment is that a gesture camera is arranged at a position facing a driver, and action information of the driver is collected; in the human-computer interaction process, an operation gesture or a voice prompt is displayed on a display end of the augmented reality device; the driver performs actions or inputs voice instructions according to the displayed operation gestures or voice prompts; if the action input by the driver is different from the operation gesture displayed by the display end, displaying an action non-standard reminding mark on the display end, and reminding the driver of correctly inputting the action through voice.
The beneficial effects of the technical scheme are as follows: by adopting the scheme provided by the embodiment, a gesture camera is arranged at a position facing a driver, and action information of the driver is collected; in the human-computer interaction process, an operation gesture or a voice prompt is displayed on a display end of the augmented reality device; the driver performs actions or inputs voice instructions according to the displayed operation gestures or voice prompts; if the action input by the driver is different from the operation gesture displayed by the display end, displaying an action non-standard reminding mark on the display end, and reminding the driver of correctly inputting the action through voice. The driver is reminded in a gesture and voice mode.
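A minimal sketch of the correction loop described above; recognize_gesture, show_reminder and speak are hypothetical callables standing in for the gesture camera's recogniser, the display end of the augmented reality device and the voice output.

```python
def interaction_step(frame, expected_gesture, recognize_gesture,
                     show_reminder, speak) -> bool:
    """One round of gesture interaction: compare the driver's recognised
    action with the displayed operation gesture and remind on mismatch."""
    actual = recognize_gesture(frame)  # e.g. "swipe_left", from the gesture camera
    if actual == expected_gesture:
        return True
    show_reminder("action-non-standard")  # reminding mark shown on the display end
    speak(f"Please repeat the gesture: {expected_gesture}")  # voice reminder
    return False
```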
In another embodiment, the S502 includes:
s5021, setting a database in the augmented reality equipment, wherein the database comprises an operation gesture or voice prompt corresponding to each virtual object, an icon of the operation gesture or voice prompt, a position of the operation gesture or voice prompt relative to the virtual object, and an irregular reminding mark corresponding to the icon of the operation gesture or voice prompt when wrong input occurs;
s5022, displaying the virtual object and an icon of an operation gesture or voice prompt corresponding to the virtual object at a preset position on a display end of the augmented reality device;
Accordingly, the S504 includes:
s5041, displaying an action non-standard reminding mark on the display end;
s5042, presetting reminding display time and icon display time;
s5043, alternately displaying the icon of the operation gesture or voice prompt and the action non-standard reminding mark according to the reminding display time and the icon display time.
The working principle of the technical scheme is as follows: the scheme adopted by the embodiment is that a database is arranged in the augmented reality equipment, wherein the database comprises an operation gesture or a voice prompt corresponding to each virtual object, an icon of the operation gesture or the voice prompt, a position of the operation gesture or the voice prompt relative to the virtual object, and an irregular reminding mark corresponding to the icon of the operation gesture or the voice prompt when wrong input occurs; displaying the virtual object and icons of operation gestures or voice prompts corresponding to the virtual object at preset positions on a display end of the augmented reality equipment;
correspondingly, displaying an action non-standard reminding mark on the display end; presetting reminding display time and icon display time; and alternately displaying the icon of the operation gesture or voice prompt and the action non-standard reminding mark according to the reminding display time and the icon display time.
The beneficial effects of the technical scheme are as follows: setting a database in the augmented reality device by adopting the scheme provided by the embodiment, wherein the database comprises an operation gesture or voice prompt corresponding to each virtual object, an icon of the operation gesture or voice prompt, a position of the operation gesture or voice prompt relative to the virtual object, and an irregular reminding mark corresponding to the icon of the operation gesture or voice prompt when wrong input occurs; displaying the virtual object and icons of operation gestures or voice prompts corresponding to the virtual object at preset positions on a display end of the augmented reality equipment;
correspondingly, displaying an action non-standard reminding mark on the display end; presetting reminding display time and icon display time; and alternately displaying the icon of the operation gesture or voice prompt and the action non-standard reminding mark according to the reminding display time and the icon display time.
The information displayed on the display end includes: the virtual object, the operation gesture or voice prompt corresponding to the virtual object, the icon of the operation gesture or voice prompt, the position of the operation gesture or voice prompt relative to the virtual object, and the non-standard reminding mark corresponding to the icon of the operation gesture or voice prompt when a wrong input occurs. The driver can therefore conveniently see the relevant information on the display screen of the display end and make a timely judgment according to the displayed content, which improves the driving experience.
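For illustration, a sketch of a per-virtual-object database entry in the spirit of S5021 and the alternating display of S5041 to S5043 follows; the field names, the show callable and the display times are assumptions rather than values given in the patent.

```python
import time
from dataclasses import dataclass

@dataclass
class PromptEntry:
    virtual_object: str
    gesture_or_voice: str  # operation gesture name or voice phrase
    icon: str              # identifier of the prompt icon
    position: tuple        # position of the prompt relative to the virtual object
    reminder_mark: str     # non-standard-action reminding mark for wrong input

def alternate_display(entry: PromptEntry, show, reminder_time=1.0,
                      icon_time=2.0, cycles=3):
    """Alternate the reminding mark and the prompt icon according to the
    preset reminder display time and icon display time."""
    for _ in range(cycles):
        show(entry.reminder_mark, at=entry.position)
        time.sleep(reminder_time)
        show(entry.icon, at=entry.position)
        time.sleep(icon_time)
```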
In another embodiment, a human-computer interaction device for augmented reality is provided, referring to fig. 3, including:
the acquisition device is used for arranging a camera with a staff gauge in front of the vehicle, and the camera with the staff gauge acquires images in front of the vehicle;
the parameter determining device is used for extracting each actual object in the image and determining the parameter of each actual object;
virtual object graph determining means for determining parameters of the virtual object; matching an actual object with a virtual object, determining a scaling ratio of the virtual object, and determining a virtual object graph according to the scaling ratio;
a display device for displaying the virtual object graph through the augmented reality equipment;
and the man-machine interaction device is used for carrying out man-machine interaction with the virtual reality equipment by a driver in a voice or gesture mode.
The working principle of the technical scheme is as follows: the scheme adopted by the embodiment comprises the following steps: the acquisition device is used for arranging a camera with a staff gauge in front of the vehicle, and the camera with the staff gauge acquires images in front of the vehicle; the parameter determining device is used for extracting each actual object in the image and determining the parameter of each actual object; virtual object graph determining means for determining parameters of the virtual object; matching an actual object with a virtual object, determining a scaling ratio of the virtual object, and determining a virtual object graph according to the scaling ratio; a display device for displaying the virtual object graph through the augmented reality equipment; and the man-machine interaction device is used for carrying out man-machine interaction with the virtual reality equipment by a driver in a voice or gesture mode.
The beneficial effects of the technical scheme are as follows: the scheme provided by the embodiment comprises the following steps: the acquisition device is used for arranging a camera with a staff gauge in front of the vehicle, and the camera with the staff gauge acquires images in front of the vehicle; the parameter determining device is used for extracting each actual object in the image and determining the parameter of each actual object; virtual object graph determining means for determining parameters of the virtual object; matching an actual object with a virtual object, determining a scaling ratio of the virtual object, and determining a virtual object graph according to the scaling ratio; a display device for displaying the virtual object graph through the augmented reality equipment; and the man-machine interaction device is used for carrying out man-machine interaction with the virtual reality equipment by a driver in a voice or gesture mode.
Images are collected by the camera with a scale, the collected images are matched with the virtual object parameters, the scaling ratio of the virtual object is determined from the matching result, and the virtual object graph is then determined according to the scaling ratio. In this way the virtual object is fused into the real scene to the greatest possible extent, and the situation is avoided in which the scene looks unreal because the virtual object is too small or too large relative to the real scene.
In another embodiment, the method further comprises:
the rear view image acquisition device is used for arranging a common camera behind the vehicle and acquiring rear view images behind the vehicle;
and the panoramic image forming device is used for carrying out panoramic image fusion on the rearview image and the image before the vehicle is acquired by the camera with the scale to form a panoramic image.
The working principle of the technical scheme is as follows: the scheme adopted by the embodiment further comprises: the rear view image acquisition device is used for arranging a common camera behind the vehicle and acquiring rear view images behind the vehicle; and the panoramic image forming device is used for carrying out panoramic image fusion on the rearview image and the image before the vehicle is acquired by the camera with the scale to form a panoramic image.
The beneficial effects of the technical scheme are as follows: the scheme provided by the embodiment further comprises the following steps: the rear view image acquisition device is used for arranging a common camera behind the vehicle and acquiring rear view images behind the vehicle; and the panoramic image forming device is used for carrying out panoramic image fusion on the rearview image and the image before the vehicle is acquired by the camera with the scale to form a panoramic image.
The panoramic image is formed to be more convenient for a driver to check the situation around the vehicle, the panoramic image is displayed through the augmented reality equipment, meanwhile, a virtual object is fused in the panoramic image, and the driver can perform man-machine interaction with the virtual reality equipment in a voice or gesture mode according to the information of the panoramic image.
In another embodiment, the virtual object graphic determining apparatus includes:
the vehicle-mounted control system is used for forming the virtual object graph;
further comprises:
the vehicle-mounted display screen device is used for arranging a vehicle-mounted display screen on the vehicle, and the vehicle-mounted display screen is connected with a vehicle-mounted control system;
the virtual reality image forming device is used for fusing the virtual object graph with the image acquired by the camera with the scale by the vehicle-mounted control system to form a virtual reality image;
the transmission device is used for transmitting the virtual reality image to the vehicle-mounted display screen;
and the vehicle-mounted display screen display device is used for displaying the virtual reality image on the vehicle-mounted display screen.
And the interaction device is used for performing man-machine interaction with the vehicle-mounted display screen by a driver in a voice or gesture mode.
The working principle of the technical scheme is as follows: the scheme adopted by the embodiment is that the virtual object graph determining device comprises: the vehicle-mounted control system is used for forming the virtual object graph;
in addition, the method further comprises the steps of: the vehicle-mounted display screen device is used for arranging a vehicle-mounted display screen on the vehicle, and the vehicle-mounted display screen is connected with a vehicle-mounted control system; the virtual reality image forming device is used for fusing the virtual object graph with the image acquired by the camera with the scale by the vehicle-mounted control system to form a virtual reality image; the transmission device is used for transmitting the virtual reality image to the vehicle-mounted display screen; and the vehicle-mounted display screen display device is used for displaying the virtual reality image on the vehicle-mounted display screen. And the interaction device is used for performing man-machine interaction with the vehicle-mounted display screen by a driver in a voice or gesture mode.
The beneficial effects of the technical scheme are as follows: the virtual object graph determining device adopting the scheme provided by the embodiment comprises: the vehicle-mounted control system is used for forming the virtual object graph;
in addition, the method further comprises the steps of: the vehicle-mounted display screen device is used for arranging a vehicle-mounted display screen on the vehicle, and the vehicle-mounted display screen is connected with a vehicle-mounted control system; the virtual reality image forming device is used for fusing the virtual object graph with the image acquired by the camera with the scale by the vehicle-mounted control system to form a virtual reality image; the transmission device is used for transmitting the virtual reality image to the vehicle-mounted display screen; and the vehicle-mounted display screen display device is used for displaying the virtual reality image on the vehicle-mounted display screen. And the interaction device is used for performing man-machine interaction with the vehicle-mounted display screen by a driver in a voice or gesture mode.
In another embodiment, the human-computer interaction device includes:
the action information acquisition device is used for arranging a gesture camera at a position opposite to the driver and acquiring action information of the driver;
the operation gesture or voice prompt display device is used for displaying an operation gesture or voice prompt on the display end of the augmented reality equipment in the human-computer interaction process;
The input device is used for the driver to act or input voice instructions according to the displayed operation gestures or voice prompts;
and the reminding device is used for displaying an action nonstandard reminding mark on the display end if the action input by the driver is different from the operation gesture displayed on the display end, and reminding the driver to correctly input the action through voice.
The working principle of the technical scheme is as follows: the scheme adopted by the embodiment is that the man-machine interaction device comprises: the action information acquisition device is used for arranging a gesture camera at a position opposite to the driver and acquiring action information of the driver; the operation gesture or voice prompt display device is used for displaying an operation gesture or voice prompt on the display end of the augmented reality equipment in the human-computer interaction process; the input device is used for the driver to act or input voice instructions according to the displayed operation gestures or voice prompts; and the reminding device is used for displaying an action nonstandard reminding mark on the display end if the action input by the driver is different from the operation gesture displayed on the display end, and reminding the driver to correctly input the action through voice.
The beneficial effects of the technical scheme are as follows: the man-machine interaction device adopting the scheme provided by the embodiment comprises: the action information acquisition device is used for arranging a gesture camera at a position opposite to the driver and acquiring action information of the driver; the operation gesture or voice prompt display device is used for displaying an operation gesture or voice prompt on the display end of the augmented reality equipment in the human-computer interaction process; the input device is used for the driver to act or input voice instructions according to the displayed operation gestures or voice prompts; and the reminding device is used for displaying an action nonstandard reminding mark on the display end if the action input by the driver is different from the operation gesture displayed on the display end, and reminding the driver to correctly input the action through voice. The driver is reminded in a gesture and voice mode.
In another embodiment, the operation gesture or voice prompt display device includes:
a database setting device, configured to set a database in the augmented reality device, where the database includes an operation gesture or a voice prompt corresponding to each virtual object, an icon of the operation gesture or the voice prompt, a position of the operation gesture or the voice prompt relative to the virtual object, and an irregular reminding mark corresponding to the icon of the operation gesture or the voice prompt when an error input occurs;
the icon display device is used for displaying the virtual object and icons of operation gestures or voice prompts corresponding to the virtual object at a preset position on the display end of the augmented reality device;
correspondingly, the reminding device comprises:
the mark display device is used for displaying the action non-standard reminding mark on the display end;
the time setting device is used for presetting reminding display time and icon display time;
and the alternate display device is used for alternately displaying the icon of the operation gesture or the voice prompt and the action nonstandard reminding mark through the reminding display time and the icon display time.
The working principle of the technical scheme is as follows: the scheme adopted by the embodiment is that the operation gesture or voice prompt display device comprises: a database setting device, configured to set a database in the augmented reality device, where the database includes an operation gesture or a voice prompt corresponding to each virtual object, an icon of the operation gesture or the voice prompt, a position of the operation gesture or the voice prompt relative to the virtual object, and an irregular reminding mark corresponding to the icon of the operation gesture or the voice prompt when an error input occurs; the icon display device is used for displaying the virtual object and icons of operation gestures or voice prompts corresponding to the virtual object at a preset position on the display end of the augmented reality device;
Correspondingly, the reminding device comprises: the mark display device is used for displaying the action non-standard reminding mark on the display end; the time setting device is used for presetting reminding display time and icon display time; and the alternate display device is used for alternately displaying the icon of the operation gesture or the voice prompt and the action nonstandard reminding mark through the reminding display time and the icon display time.
The beneficial effects of the technical scheme are as follows: the operation gesture or voice prompt display device adopting the scheme provided by the embodiment comprises: a database setting device, configured to set a database in the augmented reality device, where the database includes an operation gesture or a voice prompt corresponding to each virtual object, an icon of the operation gesture or the voice prompt, a position of the operation gesture or the voice prompt relative to the virtual object, and an irregular reminding mark corresponding to the icon of the operation gesture or the voice prompt when an error input occurs; the icon display device is used for displaying the virtual object and icons of operation gestures or voice prompts corresponding to the virtual object at a preset position on the display end of the augmented reality device;
Correspondingly, the reminding device comprises: the mark display device is used for displaying the action non-standard reminding mark on the display end; the time setting device is used for presetting reminding display time and icon display time; and the alternate display device is used for alternately displaying the icon of the operation gesture or the voice prompt and the action nonstandard reminding mark through the reminding display time and the icon display time.
The information displayed on the display end includes: the virtual object, the operation gesture or voice prompt corresponding to the virtual object, the icon of the operation gesture or voice prompt, the position of the operation gesture or voice prompt relative to the virtual object, and the non-standard reminding mark corresponding to the icon of the operation gesture or voice prompt when a wrong input occurs. The driver can therefore conveniently see the relevant information on the display screen of the display end and make a timely judgment according to the displayed content, which improves the driving experience.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (6)

1. The human-computer interaction method for augmented reality is characterized by comprising the following steps of:
s100, a camera with a scale is arranged in front of a vehicle, and the camera with the scale acquires images in front of the vehicle;
s200, extracting each actual object in the image, and determining parameters of each actual object;
s300, determining parameters of the virtual object; matching the actual object with the virtual object, determining the scaling ratio of the virtual object, and determining the virtual object graph according to the scaling ratio;
s400, displaying the virtual object graph through the augmented reality equipment;
s500, the driver performs man-machine interaction with the virtual reality equipment in a voice or gesture mode;
the following calculation is performed on the images of the virtual object before and after scaling to ensure that the images are not distorted in the scaling process, and the specific operation mode is as follows:
the quality of scaling is determined by determining the displacement difference of all pixel points before and after scaling, and specifically can be determined by an energy difference minimum function:
D(A) = \sum_{i=1}^{n} \omega \left\| \nabla E_a(i) - \nabla E_b\left[ i + (x, y)^T \right] \right\|^2

wherein D(A) is the energy difference minimum function before and after scaling, A is the displacement vector, E_b is the scaled image, E_a is the image before scaling, i is a pixel point on the image, E_a(i) is the i-th pixel point on the image before scaling, n is the number of pixel points (i = 1, 2, 3 ... n), ∇ denotes the gradient calculation, (x, y)^T = A is the displacement vector with x the horizontal displacement and y the vertical displacement, E_b[i + (x, y)^T] is the i-th pixel point of the scaled image after being shifted by the displacement vector A, and ω represents the weight;
the similarity of all the pixel points before and after scaling can be determined according to the energy difference minimum function; the average value over all the pixel points in the image is determined according to the value of the energy difference minimum function of each pixel point, the deviation of each pixel point is judged according to the average value, and if the deviation of a pixel point exceeds a threshold value, the quality of the scaled image is considered to be problematic and a new scaled image is required to be formed through re-scaling processing;
the S500 includes:
s501, a gesture camera is arranged at a position facing a driver, and action information of the driver is collected;
s502, in the human-computer interaction process, an operation gesture or a voice prompt is displayed on a display end of the augmented reality device;
s503, the driver performs actions or inputs voice instructions according to the displayed operation gestures or voice prompts;
s504, if the action input by the driver is different from the operation gesture displayed by the display end, displaying an action non-standard reminding mark on the display end, and reminding the driver of correctly inputting the action through voice;
The S502 includes:
s5021, setting a database in the augmented reality equipment, wherein the database comprises an operation gesture or voice prompt corresponding to each virtual object, an icon of the operation gesture or voice prompt, a position of the operation gesture or voice prompt relative to the virtual object, and an irregular reminding mark corresponding to the icon of the operation gesture or voice prompt when wrong input occurs;
s5022, displaying the virtual object and an icon of an operation gesture or voice prompt corresponding to the virtual object at a preset position on a display end of the augmented reality device;
accordingly, the S504 includes:
s5041, displaying an action non-standard reminding mark on the display end;
s5042, presetting reminding display time and icon display time;
s5043, alternately displaying the icon of the operation gesture or voice prompt and the action non-standard reminding mark according to the reminding display time and the icon display time.
2. The human-computer interaction method of augmented reality according to claim 1, wherein the step S100 further comprises:
s600, setting a common camera behind the vehicle, wherein the common camera collects rear view images behind the vehicle;
S700, carrying out panoramic image fusion on the rear-view image and the image in front of the vehicle acquired by the camera with a scale to form a panoramic image.
3. The human-computer interaction method of augmented reality according to claim 1, wherein the S300 comprises:
forming the virtual object graph in a vehicle-mounted control system;
further included after S300 is:
s301, a vehicle-mounted display screen is arranged on the vehicle and is connected with a vehicle-mounted control system;
s302, the vehicle-mounted control system fuses the virtual object graph and the image acquired by the camera with the scale to form a virtual reality image;
s303, transmitting the virtual reality image to the vehicle-mounted display screen;
s304, displaying the virtual reality image on the vehicle-mounted display screen;
s305, the driver performs man-machine interaction with the vehicle-mounted display screen in a voice or gesture mode.
4. An augmented reality human-machine interaction device, comprising:
the acquisition device is used for arranging a camera with a staff gauge in front of the vehicle, and the camera with the staff gauge acquires images in front of the vehicle;
the parameter determining device is used for extracting each actual object in the image and determining the parameter of each actual object;
Virtual object graph determining means for determining parameters of the virtual object; matching an actual object with a virtual object, determining a scaling ratio of the virtual object, and determining a virtual object graph according to the scaling ratio;
a display device for displaying the virtual object graph through the augmented reality equipment;
the man-machine interaction device is used for carrying out man-machine interaction with the virtual reality equipment by a driver in a voice or gesture mode;
the following calculation is performed on the images of the virtual object before and after scaling to ensure that the images are not distorted in the scaling process, and the specific operation mode is as follows:
the quality of scaling is determined by determining the displacement difference of all pixel points before and after scaling, and specifically can be determined by an energy difference minimum function:
D(A) = \sum_{i=1}^{n} \omega \left\| \nabla E_a(i) - \nabla E_b\left[ i + (x, y)^T \right] \right\|^2

wherein D(A) is the energy difference minimum function before and after scaling, A is the displacement vector, E_b is the scaled image, E_a is the image before scaling, i is a pixel point on the image, E_a(i) is the i-th pixel point on the image before scaling, n is the number of pixel points (i = 1, 2, 3 ... n), ∇ denotes the gradient calculation, (x, y)^T = A is the displacement vector with x the horizontal displacement and y the vertical displacement, E_b[i + (x, y)^T] is the i-th pixel point of the scaled image after being shifted by the displacement vector A, and ω represents the weight;
the similarity of all pixel points before and after scaling can be determined according to the energy difference minimum function; the average value over all pixel points in the image is determined from the value of the energy difference minimum function at each pixel point, and the deviation of each pixel point from this average is evaluated; if the deviation of a pixel point exceeds a threshold value, the quality of the scaled image is considered problematic and a new scaled image must be formed by re-scaling;
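A rough sketch of the quality check above, assuming the per-pixel energy combines the weighted intensity difference at the displaced position pᵢ + A with a gradient penalty on the displacement field, and that the pre-scaled image has been resampled to the scaled image's resolution for comparison; the weight w and the deviation threshold are placeholders, since the text does not fix their values.

    import numpy as np

    def pixel_energy(pre, post, disp, w=1.0):
        """Per-pixel energy: weighted squared intensity difference between the
        pre-scaled image I0 and the scaled image I1 sampled at p_i + A, plus a
        gradient penalty on the displacement field A = (x, y)."""
        h, width = pre.shape
        ys, xs = np.mgrid[0:h, 0:width]
        xi = np.clip(xs + disp[..., 0], 0, width - 1).astype(int)
        yi = np.clip(ys + disp[..., 1], 0, h - 1).astype(int)
        diff = (post[yi, xi].astype(np.float32) - pre.astype(np.float32)) ** 2
        dxy, dxx = np.gradient(disp[..., 0])   # gradients of the horizontal displacement
        dyy, dyx = np.gradient(disp[..., 1])   # gradients of the vertical displacement
        penalty = dxy ** 2 + dxx ** 2 + dyy ** 2 + dyx ** 2
        return w * diff + penalty

    def scaled_image_ok(pre, post, disp, threshold=3.0):
        """Compare each pixel's energy with the image-wide average; if any pixel
        deviates beyond the threshold, the scaled image should be re-scaled."""
        energy = pixel_energy(pre, post, disp)
        deviation = np.abs(energy - energy.mean())
        return not np.any(deviation > threshold)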
wherein the human-computer interaction device comprises:
an action information acquisition device for arranging a gesture camera at a position facing the driver and acquiring action information of the driver;
an operation gesture or voice prompt display device for displaying an operation gesture or voice prompt on the display end of the augmented reality device during human-computer interaction;
an input device for the driver to perform an action or input a voice instruction according to the displayed operation gesture or voice prompt;
a reminder device for displaying a non-standard action reminder mark on the display end if the action input by the driver differs from the operation gesture displayed on the display end, and reminding the driver by voice to input the action correctly;
wherein the operation gesture or voice prompt display device comprises:
a database setting device for setting a database in the augmented reality device, wherein the database includes the operation gesture or voice prompt corresponding to each virtual object, the icon of the operation gesture or voice prompt, the position of the operation gesture or voice prompt relative to the virtual object, and the non-standard action reminder mark corresponding to the icon of the operation gesture or voice prompt when erroneous input occurs;
an icon display device for displaying the virtual object and the icon of the operation gesture or voice prompt corresponding to the virtual object at a preset position on the display end of the augmented reality device;
correspondingly, the reminder device comprises:
a mark display device for displaying the non-standard action reminder mark on the display end;
a time setting device for presetting a reminder display time and an icon display time;
and an alternate display device for alternately displaying the icon of the operation gesture or voice prompt and the non-standard action reminder mark according to the reminder display time and the icon display time.
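One possible in-memory layout for the prompt database and the preset display times described in claim 4 above; every class and field name here is illustrative, chosen only to make the structure concrete.

    from dataclasses import dataclass

    @dataclass
    class PromptEntry:
        """One database record per virtual object (field names are illustrative)."""
        virtual_object_id: str      # which virtual object this entry belongs to
        gesture_or_voice: str       # operation gesture or voice prompt for the object
        prompt_icon: str            # icon of the operation gesture or voice prompt
        icon_offset: tuple          # position of the prompt relative to the virtual object
        error_reminder_mark: str    # non-standard action reminder mark shown on wrong input

    @dataclass
    class DisplayTiming:
        """Preset reminder/icon display times used by the alternate display device."""
        reminder_time_s: float = 1.0
        icon_time_s: float = 2.0

    prompt_db = {
        "turn_arrow": PromptEntry("turn_arrow", "swipe_right", "icons/swipe_right.png",
                                  (0.0, -0.1), "icons/try_again.png"),
    }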
5. The augmented reality human-computer interaction device according to claim 4, further comprising:
a rear-view image acquisition device for arranging an ordinary camera behind the vehicle and acquiring rear-view images behind the vehicle;
and a panoramic image forming device for performing panoramic image fusion on the rear-view image and the image in front of the vehicle acquired by the camera with the scale to form a panoramic image.
6. The augmented reality human-computer interaction device according to claim 4, wherein the virtual object graphic determining device comprises:
a vehicle-mounted control system for forming the virtual object graphic;
and the device further comprises:
a vehicle-mounted display screen device for arranging a vehicle-mounted display screen on the vehicle, the vehicle-mounted display screen being connected to the vehicle-mounted control system;
a virtual reality image forming device for the vehicle-mounted control system to fuse the virtual object graphic with the image acquired by the camera with the scale to form a virtual reality image;
a transmission device for transmitting the virtual reality image to the vehicle-mounted display screen;
a vehicle-mounted display screen display device for displaying the virtual reality image on the vehicle-mounted display screen;
and an interaction device for the driver to perform human-computer interaction with the vehicle-mounted display screen by voice or gesture.
CN202210091709.1A 2022-01-26 2022-01-26 Human-computer interaction method and device for augmented reality Active CN114454814B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210091709.1A CN114454814B (en) 2022-01-26 2022-01-26 Human-computer interaction method and device for augmented reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210091709.1A CN114454814B (en) 2022-01-26 2022-01-26 Human-computer interaction method and device for augmented reality

Publications (2)

Publication Number Publication Date
CN114454814A (en) 2022-05-10
CN114454814B (en) 2023-08-11

Family

ID=81412134

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210091709.1A Active CN114454814B (en) 2022-01-26 2022-01-26 Human-computer interaction method and device for augmented reality

Country Status (1)

Country Link
CN (1) CN114454814B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003131785A (en) * 2001-10-22 2003-05-09 Toshiba Corp Interface device, operation control method and program product
EP2711804A1 (en) * 2012-09-25 2014-03-26 Advanced Digital Broadcast S.A. Method for providing a gesture-based user interface
CN105527710A (en) * 2016-01-08 2016-04-27 北京乐驾科技有限公司 Intelligent head-up display system
CN107554425A (en) * 2017-08-23 2018-01-09 江苏泽景汽车电子股份有限公司 A kind of vehicle-mounted head-up display AR HUD of augmented reality
CN108334199A (en) * 2018-02-12 2018-07-27 华南理工大学 The multi-modal exchange method of movable type based on augmented reality and device
CN111086453A (en) * 2019-12-30 2020-05-01 深圳疆程技术有限公司 HUD augmented reality display method and device based on camera and automobile
CN112297842A (en) * 2019-07-31 2021-02-02 宝马股份公司 Autonomous vehicle with multiple display modes

Also Published As

Publication number Publication date
CN114454814A (en) 2022-05-10

Similar Documents

Publication Publication Date Title
WO2021197189A1 (en) Augmented reality-based information display method, system and apparatus, and projection device
CN111783820B (en) Image labeling method and device
JP5724543B2 (en) Terminal device, object control method, and program
EP2075761B1 (en) Method and device for adjusting output frame
EP3316080B1 (en) Virtual reality interaction method, apparatus and system
US20050116964A1 (en) Image reproducing method and apparatus for displaying annotations on a real image in virtual space
JP6335556B2 (en) Information query by pointing
EP4339938A1 (en) Projection method and apparatus, and vehicle and ar-hud
WO2005069170A1 (en) Image file list display device
WO2023071834A1 (en) Alignment method and alignment apparatus for display device, and vehicle-mounted display system
CN109271023B (en) Selection method based on three-dimensional object outline free-hand gesture action expression
US20220375258A1 (en) Image processing method and apparatus, device and storage medium
US8643679B2 (en) Storage medium storing image conversion program and image conversion apparatus
CN111432123A (en) Image processing method and device
CN112242009A (en) Display effect fusion method, system, storage medium and main control unit
TW201919390A (en) Display system and method thereof
CN114454814B (en) Human-computer interaction method and device for augmented reality
CN113128295A (en) Method and device for identifying dangerous driving state of vehicle driver
JP2012212338A (en) Image processing determination device
WO2005076122A1 Method of performing a panoramic demonstration of liquid crystal panel image simulation in view of observer's viewing angle
CN115268658A (en) Multi-party remote space delineation marking method based on augmented reality
CN115309113A (en) Guiding method for part assembly and related equipment
CN111651043B (en) Augmented reality system supporting customized multi-channel interaction
CN114299809A (en) Direction information display method, display device, electronic equipment and readable storage medium
CN212873085U (en) Head-up display system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230719

Address after: 518000 Crown Industrial Zone Factory Building, 21 Tairan 9 Road, Tian'an Community, Shatou Street, Futian District, Shenzhen City, Guangdong Province, 1 3-storey C1303

Applicant after: SHENZHEN SPACE DIGITAL TECHNOLOGY Co.,Ltd.

Address before: 518000 Room 201, building A, No. 1 Qianhai Road, Shenzhen Qianhai cooperation zone, Shenzhen, Guangdong.

Applicant before: SHENZHEN SHIKONG TECHNOLOGY GROUP CO.,LTD.

GR01 Patent grant