WO2020151430A1 - Air imaging system and implementation method thereof - Google Patents

Air imaging system and implementation method thereof

Info

Publication number
WO2020151430A1
WO2020151430A1 (PCT/CN2019/126949, CN2019126949W)
Authority
WO
WIPO (PCT)
Prior art keywords
signal
user
voice
input
unit
Prior art date
Application number
PCT/CN2019/126949
Other languages
English (en)
French (fr)
Inventor
李新福
Original Assignee
广东康云科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广东康云科技有限公司
Publication of WO2020151430A1 publication Critical patent/WO2020151430A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/16 Sound input; Sound output
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • The present invention relates to the field of imaging technology, and in particular to an air imaging system and an implementation method thereof.
  • Air imaging technology forms an image of an object in the air, so that people can see the image without auxiliary devices such as VR glasses, producing a striking visual effect that has attracted more and more attention.
  • However, most current air imaging technologies can only form a phantom of the object in the air, like a mirage: visible but intangible. Control operations such as viewing-angle switching and color conversion cannot be performed on the phantom by interacting with the audience, so the operation is not convenient enough and the functions are not rich enough.
  • Accordingly, the purpose of the present invention is to provide a convenient and feature-rich air imaging system and an implementation method thereof.
  • An air imaging system, including:
  • a display device for forming a three-dimensional model of an object in the air through air imaging;
  • a signal detection device for detecting an input signal, where the input signal includes at least one of a gesture signal, a somatosensory signal, a brain wave signal, an eye movement signal, a voice signal, a touch signal, and an image signal;
  • a control device for controlling display on the display device according to the input signal.
  • Further, the signal detection device includes:
  • a somatosensory sensor for acquiring an input somatosensory signal;
  • a gesture sensor for acquiring an input gesture signal;
  • an eye tracker for acquiring an input eye movement signal;
  • a touch module for acquiring an input touch signal;
  • a voice collection module for acquiring an input voice signal;
  • a brain wave acquisition device for acquiring an input brain wave signal;
  • a camera for acquiring an input image signal.
  • Further, the control device includes:
  • a somatosensory recognition unit for identifying the user's somatosensory actions according to the acquired somatosensory signal;
  • a gesture recognition unit for recognizing the user's gestures according to the acquired gesture signal;
  • an eye movement recognition unit for identifying the user's eye movements according to the acquired eye movement signal;
  • a touch signal recognition unit for recognizing the user's touch instructions according to the acquired touch signal;
  • a voice intercom unit for recognizing the user's voice commands according to the acquired voice signal;
  • a brain wave signal recognition unit for identifying the user's brain waves according to the acquired brain wave signal;
  • a face recognition unit for performing face recognition on the user according to the acquired image signal;
  • an intelligent control unit for triggering a control signal according to at least one of the user's face recognition result, somatosensory actions, gestures, eye movements, touch instructions, voice commands, and brain waves, so as to intelligently control the display device;
  • the intelligent control includes intelligent recognition of the three-dimensional model of the object, multilingual voice commentary, AI voice answering, automatic guided tours, scene switching, mode switching, and model special-effect control;
  • a storage unit for locally storing the three-dimensional model of the object and the control signals.
  • Further, the system includes a communication system, and the communication system includes:
  • a voice message unit for providing a voice message service;
  • an SMS message unit for providing an SMS message service;
  • a telephone customer service unit for providing telephone customer service;
  • a robot customer service unit for providing robot voice customer service;
  • a live video customer service unit for providing live video customer service.
  • Further, the system includes a big data analysis module, and the big data analysis module includes:
  • a new user statistics unit for counting the number of new users;
  • a user retention statistics unit for counting the number of retained users;
  • an activity analysis unit for analyzing user activity;
  • a user information analysis unit for analyzing the users' gender, age, registration information, IP distribution, and regional distribution;
  • a hot spot analysis unit for analyzing the users' viewing hot spots and generating corresponding heat maps;
  • a user viewing behavior analysis unit for performing user viewing behavior analysis;
  • the user viewing behavior analysis includes at least one of face recognition analysis, somatosensory action analysis, gesture analysis, eye tracking, mouse browsing trajectory analysis with video recording, voice analysis, and brain wave analysis;
  • an information sharing unit for sharing and publishing the analysis results of the big data analysis module.
  • Further, the system includes a back-end server for communicating with the control device, remotely storing the three-dimensional model of the object, the multilingual voice commentary content, and the AI voice answer content, and remotely controlling the display content of the display device through the control device.
  • An implementation method of an air imaging system includes the following steps: forming a three-dimensional model of an object in the air through air imaging by means of a display device; detecting an input signal; and controlling display on the display device according to the input signal.
  • The input signal includes at least one of a gesture signal, a somatosensory signal, a brain wave signal, an eye movement signal, a voice signal, a touch signal, and an image signal.
  • The step of detecting the input signal specifically includes acquiring the input somatosensory, gesture, eye movement, touch, voice, brain wave, and image signals.
  • The step of controlling display on the display device according to the input signal specifically includes recognizing the user's somatosensory actions, gestures, eye movements, touch instructions, voice commands, brain waves, and face from the acquired signals.
  • A control signal is then triggered according to at least one of the user's face recognition result, somatosensory actions, gestures, eye movements, touch instructions, voice commands, and brain waves, so as to intelligently control the display device.
  • The intelligent control includes intelligent recognition of the three-dimensional model of the object, multilingual voice commentary, AI voice answering, automatic guided tours, scene switching, mode switching, and model special-effect control.
  • The three-dimensional model of the object and the control signals are stored locally.
  • The three-dimensional model of the object, the multilingual voice commentary content, and the AI voice answer content are stored remotely on a back-end server, which also remotely controls the display content of the display device.
  • The air imaging system of the present invention and its implementation method use the display device to form a three-dimensional model of an object in the air through air imaging, and control display on the display device according to the input signal.
  • Not only can the real image of the three-dimensional model of the object be displayed in the air through air imaging, but input signals generated by interaction with the audience, such as gesture signals, somatosensory signals, brain wave signals, eye movement signals, voice signals, touch signals, and image signals, can be combined to perform control operations on the model such as viewing-angle switching and color conversion, making the system more convenient and more functional.
  • FIG. 1 is a structural block diagram of an air imaging system provided by an embodiment of the present invention
  • FIG. 2 is a structural block diagram of a control device according to an embodiment of the present invention.
  • FIG. 3 is a flowchart of an implementation method of an air imaging system provided by an embodiment of the present invention.
  • Although the terms first, second, third, etc. may be used in this disclosure to describe various elements, these elements should not be limited by these terms, which are only used to distinguish elements of the same type from each other.
  • For example, without departing from the scope of this disclosure, a first element may also be referred to as a second element, and similarly, a second element may also be referred to as a first element.
  • The use of any and all examples or exemplary language ("for example", "such as", etc.) provided herein is only intended to better illustrate the embodiments of the present invention and, unless otherwise required, does not impose limitations on the scope of the present invention.
  • As shown in FIG. 1, an embodiment of the present invention provides an air imaging system, including:
  • a display device for forming a three-dimensional model of an object in the air through air imaging;
  • a signal detection device for detecting an input signal, where the input signal includes at least one of a gesture signal, a somatosensory signal, a brain wave signal, an eye movement signal, a voice signal, a touch signal, and an image signal;
  • a control device for controlling display on the display device according to the input signal.
  • Specifically, the objects include physical items (such as commodities) and environments (such as the indoor environment of a museum).
  • The three-dimensional model of the object can be obtained in advance by collecting the object's three-dimensional data (two-dimensional images and point cloud data) with manual or automatic scanning equipment (such as cameras, aerial drones, and automatic scanning robots), and then sending the data to the cloud or a back-end server for repair, retouching, rendering, and optimization.
  • The resulting realistic three-dimensional model can be browsed or viewed by users through 360 degrees without blind spots.
  • The display device projects the three-dimensional model of the object into an imaging area in the air (which can generally be set or adjusted in advance); the model is imaged entirely in the air, without any projection screen.
  • To improve the display effect and facilitate subsequent display control, the display device can use medium-free air imaging technology (such as diffractive optical imaging) to project the three-dimensional model of the object into the air through an optical system, forming a real image rather than a "phantom".
  • The signal detection device is mainly used to obtain the user's interactive operation signals on the three-dimensional model of the object. It can be arranged on the body of the display device, located within the display device's imaging area in the air, or placed at another detectable position in the air.
  • The control device controls display on the display device according to the input signal, which specifically includes switching the viewing angle, color, and zoom level of the three-dimensional model of the object, as well as displaying exploded views, perspective views, transformer special effects, and fluid special effects of the model.
  • Data can be transmitted between the control device and the signal detection device and display device in a wired or wireless manner, including but not limited to HDMI, VGA, USB, WIFI, Bluetooth, and infrared connections.
  • The control device may be software, hardware, firmware, or a combination thereof.
  • For example, the control device may be, but is not limited to, a smartphone, a tablet computer, a notebook computer, a smartwatch, a computer, or an industrial computer.
  • As can be seen from the above, this embodiment uses the display device to form a three-dimensional model of the object in the air through air imaging and controls display according to the input signal.
  • Not only can the real image of the three-dimensional model be displayed in the air through air imaging, but input signals generated by interaction with the audience, such as gesture signals, somatosensory signals, brain wave signals, eye movement signals, voice signals, touch signals, and image signals, can be combined to perform control operations on the model such as viewing-angle switching and color conversion, making the system more convenient and more functional.
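  • As an illustrative aside, the control flow just described can be sketched as follows. This is a minimal sketch, not part of the disclosure: InputEvent, DisplayDevice, and the handler table are hypothetical stand-ins for the recognizers and display operations the text names, and only two of the seven input kinds are shown.

```python
# Hypothetical sketch of the control device's dispatch loop: recognizers turn
# raw input signals into events, and each event kind maps to a display action.
from dataclasses import dataclass
from typing import Callable, Dict, Iterable

@dataclass
class InputEvent:
    kind: str      # e.g. "gesture", "voice", "eye", "touch", "brainwave"
    payload: dict  # recognizer output, e.g. {"direction": "left"}

class DisplayDevice:
    """Stand-in for the air imaging display; methods are illustrative only."""
    def switch_view(self, direction: str) -> None:
        print(f"switching viewing angle: {direction}")
    def change_color(self, color: str) -> None:
        print(f"changing model color to {color}")

def make_dispatcher(display: DisplayDevice) -> Dict[str, Callable[[dict], None]]:
    # One handler per recognized input kind; a real system would also cover
    # somatosensory, touch, eye movement, brain wave, and face events.
    return {
        "gesture": lambda p: display.switch_view(p["direction"]),
        "voice":   lambda p: display.change_color(p["color"]),
    }

def control_loop(events: Iterable[InputEvent], display: DisplayDevice) -> None:
    handlers = make_dispatcher(display)
    for event in events:
        handler = handlers.get(event.kind)
        if handler:  # unknown signal kinds are simply ignored
            handler(event.payload)

control_loop(
    [InputEvent("gesture", {"direction": "left"}),
     InputEvent("voice", {"color": "black"})],
    DisplayDevice(),
)
```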
  • Referring to FIG. 1, in a further preferred embodiment, the signal detection device includes:
  • a somatosensory sensor for acquiring an input somatosensory signal;
  • a gesture sensor for acquiring an input gesture signal;
  • an eye tracker for acquiring an input eye movement signal;
  • a touch module for acquiring an input touch signal;
  • a voice collection module for acquiring an input voice signal;
  • a brain wave acquisition device for acquiring an input brain wave signal;
  • a camera for acquiring an input image signal.
  • Specifically, the somatosensory sensor captures the user's somatosensory signals (body movement signals such as swaying or shaking left and right).
  • The gesture sensor captures the user's gesture signals (such as palm and thumb motion signals).
  • The eye tracker captures the user's eye movements. Eye tracking serves two purposes: first, to automatically control the display device to switch the displayed content according to the focus of the eyes; for example, if the user's gaze stays on one of several three-dimensional models for a few seconds, the display device can be controlled to switch automatically to that model's scene for further detailed display; second, to monitor the user's eye movements in real time.
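  • The dwell rule in the previous bullet can be sketched as follows. This is a minimal illustration rather than the disclosed implementation, and the 2-second threshold is an assumption: the text says only "a few seconds".

```python
# Hypothetical sketch of dwell-based gaze selection: if the gaze stays on the
# same model for longer than a threshold, switch the display to that model.
DWELL_THRESHOLD_S = 2.0  # assumed value; the text only says "a few seconds"

def select_by_dwell(gaze_samples, threshold_s=DWELL_THRESHOLD_S):
    """gaze_samples: iterable of (timestamp_seconds, model_id) pairs."""
    current = None
    dwell_start = None
    for t, model_id in gaze_samples:
        if model_id != current:            # gaze moved to a different model
            current, dwell_start = model_id, t
        elif current is not None and t - dwell_start >= threshold_s:
            return current                 # dwelled long enough: select it
    return None

samples = [(0.0, "car"), (0.5, "car"), (1.2, "watch"), (2.0, "watch"), (3.5, "watch")]
print(select_by_dwell(samples))  # -> "watch"
```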
  • The touch module allows the user to input touch commands by touch. For example, user touch signals can be acquired through virtual touch buttons within the area displaying the three-dimensional model of the object (these buttons can also be generated by air imaging technology), or through a touch screen provided on the display device.
  • The voice collection module collects the user's voice signals. Taking a three-dimensional model of a car as an example, the user's voice signal can be a voice command such as "open the door", "turn on the air conditioner in the car", or "change the body color to black".
  • Preferably, the voice collection module may be a voice collection device such as a pickup or a microphone.
  • The brain wave acquisition device collects the user's brain wave signals in order to identify the user's intentions or thoughts, and thereby control the display device to perform corresponding operations on the three-dimensional model of the object, such as color switching and view switching.
  • The camera captures images of the user.
  • Preferably, the camera may be an RGB-D camera capable of simultaneously collecting two-dimensional face image information and depth information, so as to obtain more accurate user images.
  • Referring to FIG. 2, in a further preferred embodiment, the control device includes:
  • a somatosensory recognition unit for identifying the user's somatosensory actions according to the acquired somatosensory signal;
  • a gesture recognition unit for recognizing the user's gestures according to the acquired gesture signal;
  • an eye movement recognition unit for identifying the user's eye movements according to the acquired eye movement signal;
  • a touch signal recognition unit for recognizing the user's touch instructions according to the acquired touch signal;
  • a voice intercom unit for recognizing the user's voice commands according to the acquired voice signal;
  • a brain wave signal recognition unit for identifying the user's brain waves according to the acquired brain wave signal;
  • a face recognition unit for performing face recognition on the user (including user identity recognition and facial expression recognition) according to the acquired image signal;
  • an intelligent control unit for triggering a control signal according to at least one of the user's face recognition result, somatosensory actions, gestures, eye movements, touch instructions, voice commands, and brain waves, so as to intelligently control the display device;
  • the intelligent control includes intelligent recognition of the three-dimensional model of the object, multilingual voice commentary, AI voice answering, automatic guided tours, scene switching, mode switching, and model special-effect control;
  • a storage unit for locally storing the three-dimensional model of the object and the control signals.
  • Specifically, identifying the user's eye movements mainly means identifying the focus of the viewer's attention (when seeing a part of interest, the user's eyes behave differently than otherwise; for example, the gaze dwells for a different amount of time).
  • The control device then directs the display device to switch the three-dimensional model of the object to the focus or detail the viewer is attending to (for instance, if it is recognized that the user is focusing on the car body, the display switches directly to the three-dimensional model of the car body).
  • Face recognition is mainly used to identify the viewer's identity and facial expressions, and the viewer's current mood (such as happy or unhappy) can be recognized from the facial expressions.
  • Intelligent recognition of the three-dimensional model of an object means that when a user or viewer clicks or selects a component or element in the model through somatosensory input, gestures, or similar means, the name and other information of that component or element can be identified automatically (for example, by recognizing a corresponding label attached in advance).
  • For example, if the three-dimensional model of the object is a car and the user clicks on a wheel, the control device automatically informs the user through text, sound, and so on that a wheel is currently selected, and the name "wheel" can be displayed at the corresponding position of the wheel in the imaging area.
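  • A minimal sketch of the label lookup just described, assuming each part of the model carries a pre-attached label keyed by a part ID; the IDs and the label table below are invented for illustration:

```python
# Hypothetical sketch of label-based component recognition: each part of the
# 3D model carries a pre-attached label, and a pick event simply looks it up.
PART_LABELS = {  # illustrative data; labels would be attached during modeling
    "car/wheel_front_left": {"name": "wheel", "note": "front-left wheel"},
    "car/body":             {"name": "body",  "note": "car body panel"},
}

def on_part_picked(part_id: str) -> str:
    label = PART_LABELS.get(part_id)
    if label is None:
        return "unrecognized component"
    # In the described system this text would be shown next to the part in
    # the aerial imaging area and/or read out by voice.
    return f'Currently selected: {label["name"]} ({label["note"]})'

print(on_part_picked("car/wheel_front_left"))
```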
  • Multilingual voice commentary means introducing to the user, in multiple languages (such as Chinese, English, and French), the three-dimensional model of the object and detailed information about a component or element in the model (such as history, parameters, sales, performance, features, and after-sales service). For example, if the three-dimensional model of the object is a car, the car's brand, model, price, and other information can be introduced in Chinese; likewise, after the user selects a wheel, its parameters and performance can be introduced by voice.
  • The multilingual voice commentary content can be pre-stored locally in the control device or on the Internet, in the cloud, or on a back-end server, and read or called when needed.
  • AI voice answering means automatically answering users' questions.
  • The answers to the users' questions can be pre-stored or entered locally in the control device, or stored directly in the cloud or on a back-end server, to be read automatically when needed.
  • For example, the user asks: "Which year was this car produced? How is the price/performance ratio?"
  • The control device first searches locally for a corresponding answer; if one exists, it answers directly by voice, and if not, it obtains the corresponding answer from the cloud or back-end server over the Internet and then broadcasts it by voice.
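  • The local-first lookup with cloud fallback described in the previous bullet can be sketched as follows; fetch_from_cloud is a hypothetical stand-in for a request to the cloud or back-end server, and the stored answer is dummy data:

```python
# Hypothetical sketch of local-first question answering with cloud fallback.
LOCAL_ANSWERS = {  # illustrative pre-stored Q&A content
    "which year was this car produced?": "This model was first produced in 2018.",
}

def fetch_from_cloud(question: str) -> str:
    # Placeholder for a request to the cloud / back-end server over the Internet.
    return "Sorry, I could not find an answer."

def answer(question: str) -> str:
    key = question.strip().lower()
    local = LOCAL_ANSWERS.get(key)
    if local is not None:         # answer found locally: respond directly
        return local
    return fetch_from_cloud(key)  # otherwise fall back to the back end

print(answer("Which year was this car produced?"))
```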
  • Automatic guided tours means automatically providing users with tour information so that they can understand the details. For example, a user unfamiliar with the three-dimensional model of a museum can learn the general layout through an automatic tour, and information about each part of the museum can be introduced automatically one by one through voice and other means (after one location is introduced, the tour moves to the next; the introduction or navigation order can be preset).
  • Scene switching refers to switching between three-dimensional models of different scenes, such as switching from a museum scene to an automobile exhibition scene.
  • Mode switching refers to switching between different modes. Taking the three-dimensional model of a car as an example, mode switching can be switching of the viewing angle, the body color, the interior trim, the wheel hubs, and so on.
  • Model special-effect control means providing model special effects such as exploded views, perspective views, transformer special effects, and fluid special effects of the three-dimensional model of the object, so that users can obtain more model information (such as internal parts) and enjoy more dynamic effects.
  • For example, when the three-dimensional model of the object is a watch, an exploded view of the watch can be displayed to help the user understand its internal parts and structure.
  • Referring to FIG. 1, in a further preferred embodiment, the system also includes a communication system, and the communication system includes:
  • a voice message unit for providing a voice message service;
  • an SMS message unit for providing an SMS message service;
  • a telephone customer service unit for providing telephone customer service;
  • a robot customer service unit for providing robot voice customer service;
  • a live video customer service unit for providing live video customer service.
  • Specifically, in addition to traditional voice message, SMS message, and telephone customer service, the present invention can also provide robot voice customer service (a model pre-trained through self-learning) and live video customer service (a customizable model obtained by scanning and modeling a real person), making the service more flexible.
  • In a further preferred embodiment, the system also includes a big data analysis module, and the big data analysis module includes:
  • a new user statistics unit for counting the number of new users;
  • a user retention statistics unit for counting the number of retained users;
  • an activity analysis unit for analyzing user activity;
  • a user information analysis unit for analyzing the users' gender, age, registration information, IP distribution, and regional distribution;
  • a hot spot analysis unit for analyzing the users' viewing hot spots and generating corresponding heat maps;
  • a user viewing behavior analysis unit for performing user viewing behavior analysis;
  • the user viewing behavior analysis includes at least one of face recognition analysis, somatosensory action analysis, gesture analysis, eye tracking, mouse browsing trajectory analysis with video recording, voice analysis, and brain wave analysis;
  • an information sharing unit for sharing and publishing the analysis results of the big data analysis module.
  • Specifically, the big data analysis module mainly analyzes users' behavior while they view the three-dimensional model of the object displayed by the display device, to facilitate subsequent use (such as product recommendations).
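  • As an illustrative sketch of how the hot spot analysis unit described earlier might accumulate viewing points into a heat map; the grid size and the normalized coordinate convention are assumptions, not disclosed details:

```python
# Hypothetical sketch of hot-spot analysis: accumulate viewing points
# (e.g. gaze or touch coordinates, normalized to [0, 1)) into a grid heatmap.
GRID = 4  # coarse 4x4 grid for illustration

def build_heatmap(points, grid=GRID):
    heat = [[0] * grid for _ in range(grid)]
    for x, y in points:  # x, y in [0, 1)
        heat[min(int(y * grid), grid - 1)][min(int(x * grid), grid - 1)] += 1
    return heat

views = [(0.1, 0.1), (0.12, 0.15), (0.8, 0.9), (0.11, 0.14)]
for row in build_heatmap(views):
    print(row)  # hot spot accumulates in the top-left cell
```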
  • The information sharing unit can share and publish the big data analysis results to existing social media such as WeChat, Weibo, and blogs.
  • In a further preferred embodiment, the system also includes a back-end server for communicating with the control device, remotely storing the three-dimensional model of the object, the multilingual voice commentary content, and the AI voice answer content, and remotely controlling the display content of the display device through the control device.
  • Specifically, this embodiment also adds a back-end server that can remotely control the display content of the display device, meeting the personalized customization requirements of different users; the display device can be controlled remotely anytime and anywhere, realizing cross-space data sharing and greater convenience.
  • Controlling the display content of the display device mainly means controlling the three-dimensional model of the object, for example zooming it or switching its colors.
  • Referring to FIG. 3, an embodiment of the present invention also provides an implementation method of an air imaging system, which includes the following steps: forming a three-dimensional model of an object in the air through air imaging by means of a display device; detecting an input signal; and controlling display on the display device according to the input signal.
  • The input signal includes at least one of a gesture signal, a somatosensory signal, a brain wave signal, an eye movement signal, a voice signal, a touch signal, and an image signal.
  • The step of detecting the input signal specifically includes acquiring the input somatosensory, gesture, eye movement, touch, voice, brain wave, and image signals.
  • The step of controlling display on the display device according to the input signal specifically includes recognizing the user's somatosensory actions, gestures, eye movements, touch instructions, voice commands, brain waves, and face from the acquired signals.
  • A control signal is triggered according to at least one of the user's face recognition result, somatosensory actions, gestures, eye movements, touch instructions, voice commands, and brain waves, so as to intelligently control the display device.
  • The intelligent control includes intelligent recognition of the three-dimensional model of the object, multilingual voice commentary, AI voice answering, automatic guided tours, scene switching, mode switching, and model special-effect control.
  • The three-dimensional model of the object and the control signals are stored locally.
  • The three-dimensional model of the object, the multilingual voice commentary content, and the AI voice answer content are stored remotely on a back-end server, which also remotely controls the display content of the display device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Geometry (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An air imaging system and an implementation method thereof. The system includes a display device, a signal detection device, and a control device. The method includes: forming a three-dimensional model of an object in the air through air imaging by means of the display device; detecting an input signal, where the input signal includes at least one of a gesture signal, a somatosensory signal, a brain wave signal, an eye movement signal, a voice signal, a touch signal, and an image signal; and controlling display on the display device according to the input signal. The system and its implementation method can not only display the real image of a three-dimensional model of an object in the air through air imaging, but can also use input signals generated by interaction with the audience, such as gesture, somatosensory, brain wave, eye movement, voice, touch, and image signals, to perform control operations on the model such as viewing-angle switching and color conversion, making it more convenient and more feature-rich. It can be widely applied in the field of imaging technology.

Description

Air imaging system and implementation method thereof
Technical Field
The present invention relates to the field of imaging technology, and in particular to an air imaging system and an implementation method thereof.
Background Art
The traditional way of displaying new products in transparent display windows or cabinets has become outdated and has lost its appeal to audiences. With the continuous updating and development of technology, new display methods keep emerging, and air imaging has come into being. Air imaging technology forms an image of an object in the air, so that people can see the image without auxiliary devices such as VR glasses, producing a striking visual effect that has attracted more and more attention. However, most current air imaging technologies can only form a phantom of the object in the air, like a mirage: visible but intangible. Control operations such as viewing-angle switching and color conversion cannot be performed on the phantom by interacting with the audience, so the operation is not convenient enough and the functions are not rich enough.
Summary of the Invention
To solve the above technical problems, the purpose of the present invention is to provide a convenient and feature-rich air imaging system and an implementation method thereof.
The technical solution adopted in one aspect of the present invention is:
An air imaging system, including:
a display device for forming a three-dimensional model of an object in the air through air imaging;
a signal detection device for detecting an input signal, where the input signal includes at least one of a gesture signal, a somatosensory signal, a brain wave signal, an eye movement signal, a voice signal, a touch signal, and an image signal;
a control device for controlling display on the display device according to the input signal.
Further, the signal detection device includes:
a somatosensory sensor for acquiring an input somatosensory signal;
a gesture sensor for acquiring an input gesture signal;
an eye tracker for acquiring an input eye movement signal;
a touch module for acquiring an input touch signal;
a voice collection module for acquiring an input voice signal;
a brain wave acquisition device for acquiring an input brain wave signal;
a camera for acquiring an input image signal.
Further, the control device includes:
a somatosensory recognition unit for identifying the user's somatosensory actions according to the acquired somatosensory signal;
a gesture recognition unit for recognizing the user's gestures according to the acquired gesture signal;
an eye movement recognition unit for identifying the user's eye movements according to the acquired eye movement signal;
a touch signal recognition unit for recognizing the user's touch instructions according to the acquired touch signal;
a voice intercom unit for recognizing the user's voice commands according to the acquired voice signal;
a brain wave signal recognition unit for identifying the user's brain waves according to the acquired brain wave signal;
a face recognition unit for performing face recognition on the user according to the acquired image signal;
an intelligent control unit for triggering a control signal according to at least one of the user's face recognition result, somatosensory actions, gestures, eye movements, touch instructions, voice commands, and brain waves, so as to intelligently control the display device, where the intelligent control includes intelligent recognition of the three-dimensional model of the object, multilingual voice commentary, AI voice answering, automatic guided tours, scene switching, mode switching, and model special-effect control;
a storage unit for locally storing the three-dimensional model of the object and the control signals.
Further, the system includes a communication system, and the communication system includes:
a voice message unit for providing a voice message service;
an SMS message unit for providing an SMS message service;
a telephone customer service unit for providing telephone customer service;
a robot customer service unit for providing robot voice customer service;
a live video customer service unit for providing live video customer service.
Further, the system includes a big data analysis module, and the big data analysis module includes:
a new user statistics unit for counting the number of new users;
a user retention statistics unit for counting the number of retained users;
an activity analysis unit for analyzing user activity;
a user information analysis unit for analyzing the users' gender, age, registration information, IP distribution, and regional distribution;
a hot spot analysis unit for analyzing the users' viewing hot spots and generating corresponding heat maps;
a user viewing behavior analysis unit for performing user viewing behavior analysis, where the user viewing behavior analysis includes at least one of face recognition analysis, somatosensory action analysis, gesture analysis, eye tracking, mouse browsing trajectory analysis with video recording, voice analysis, and brain wave analysis;
an information sharing unit for sharing and publishing the analysis results of the big data analysis module.
Further, the system includes a back-end server, and the back-end server is used to communicate with the control device, remotely store the three-dimensional model of the object, the multilingual voice commentary content, and the AI voice answer content, and remotely control the display content of the display device through the control device.
The technical solution adopted in another aspect of the present invention is:
An implementation method of an air imaging system, including the following steps:
forming a three-dimensional model of an object in the air through air imaging by means of a display device;
detecting an input signal, where the input signal includes at least one of a gesture signal, a somatosensory signal, a brain wave signal, an eye movement signal, a voice signal, a touch signal, and an image signal;
controlling display on the display device according to the input signal.
Further, the step of detecting the input signal specifically includes:
acquiring an input somatosensory signal;
acquiring an input gesture signal;
acquiring an input eye movement signal;
acquiring an input touch signal;
acquiring an input voice signal;
acquiring an input brain wave signal;
acquiring an input image signal.
Further, the step of controlling display on the display device according to the input signal specifically includes:
identifying the user's somatosensory actions according to the acquired somatosensory signal;
recognizing the user's gestures according to the acquired gesture signal;
identifying the user's eye movements according to the acquired eye movement signal;
recognizing the user's touch instructions according to the acquired touch signal;
recognizing the user's voice commands according to the acquired voice signal;
identifying the user's brain waves according to the acquired brain wave signal;
performing face recognition on the user according to the acquired image signal;
triggering a control signal according to at least one of the user's face recognition result, somatosensory actions, gestures, eye movements, touch instructions, voice commands, and brain waves, so as to intelligently control the display device, where the intelligent control includes intelligent recognition of the three-dimensional model of the object, multilingual voice commentary, AI voice answering, automatic guided tours, scene switching, mode switching, and model special-effect control;
locally storing the three-dimensional model of the object and the control signals.
Further, the method includes the following step:
remotely storing the three-dimensional model of the object, the multilingual voice commentary content, and the AI voice answer content on a back-end server, and remotely controlling the display content of the display device.
The beneficial effects of the present invention are as follows: the air imaging system of the present invention and its implementation method use the display device to form a three-dimensional model of an object in the air through air imaging, and control display on the display device according to the input signal. Not only can the real image of the three-dimensional model of the object be displayed in the air through air imaging, but input signals generated by interaction with the audience, such as gesture signals, somatosensory signals, brain wave signals, eye movement signals, voice signals, touch signals, and image signals, can be combined to perform control operations on the model such as viewing-angle switching and color conversion, making the system more convenient and more functional.
Brief Description of the Drawings
FIG. 1 is a structural block diagram of an air imaging system provided by an embodiment of the present invention;
FIG. 2 is a structural block diagram of a control device according to an embodiment of the present invention;
FIG. 3 is a flowchart of an implementation method of an air imaging system provided by an embodiment of the present invention.
Detailed Description of the Embodiments
The concept, specific structure, and resulting technical effects of the present invention will be described clearly and completely below with reference to the embodiments and the drawings, so that the purpose, solutions, and effects of the present invention can be fully understood.
It should be noted that, unless otherwise specified, when a feature is said to be "fixed" or "connected" to another feature, it may be directly fixed or connected to that feature, or indirectly fixed or connected to it. Descriptions such as up, down, left, and right used in this disclosure refer only to the relative positions of the components of this disclosure in the drawings. The singular forms "a", "the", and "said" used in this disclosure are also intended to include the plural forms, unless the context clearly indicates otherwise. In addition, unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the art; the terms used in this specification are only for describing specific embodiments, not for limiting the present invention. The term "and/or" used herein includes any combination of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this disclosure to describe various elements, these elements should not be limited by these terms, which are only used to distinguish elements of the same type from each other. For example, without departing from the scope of this disclosure, a first element may also be referred to as a second element, and similarly, a second element may also be referred to as a first element. The use of any and all examples or exemplary language ("for example", "such as", etc.) provided herein is only intended to better illustrate the embodiments of the present invention and, unless otherwise required, does not impose limitations on the scope of the present invention.
As shown in FIG. 1, an embodiment of the present invention provides an air imaging system, including:
a display device for forming a three-dimensional model of an object in the air through air imaging;
a signal detection device for detecting an input signal, where the input signal includes at least one of a gesture signal, a somatosensory signal, a brain wave signal, an eye movement signal, a voice signal, a touch signal, and an image signal;
a control device for controlling display on the display device according to the input signal.
Specifically, the objects include physical items (such as commodities) and environments (such as the indoor environment of a museum). The three-dimensional model of the object can be obtained in advance by collecting the object's three-dimensional data (two-dimensional images and point cloud data) with manual or automatic scanning equipment (such as cameras, aerial drones, and automatic scanning robots), and then sending the data to the cloud or a back-end server for repair, retouching, rendering, and optimization. The resulting realistic three-dimensional model can be browsed or viewed by users through 360 degrees without blind spots.
The display device projects the three-dimensional model of the object into an imaging area in the air (which can generally be set or adjusted in advance); the model is imaged entirely in the air, without any projection screen. To improve the display effect and facilitate subsequent display control, the display device can use medium-free air imaging technology (such as diffractive optical imaging) to project the three-dimensional model of the object into the air through an optical system, forming a real image rather than a "phantom".
The signal detection device is mainly used to obtain the user's interactive operation signals on the three-dimensional model of the object. It can be arranged on the body of the display device, located within the display device's imaging area in the air, or placed at another detectable position in the air.
The control device controls display on the display device according to the input signal, which specifically includes switching the viewing angle, color, and zoom level of the three-dimensional model of the object, as well as displaying exploded views, perspective views, transformer special effects, and fluid special effects of the model. Data can be transmitted between the control device and the signal detection device and display device in a wired or wireless manner, including but not limited to HDMI, VGA, USB, WIFI, Bluetooth, and infrared connections. The control device may be software, hardware, firmware, or a combination thereof; for example, it may be, but is not limited to, a smartphone, a tablet computer, a notebook computer, a smartwatch, a computer, or an industrial computer.
As can be seen from the above, this embodiment uses the display device to form a three-dimensional model of the object in the air through air imaging and controls display according to the input signal. Not only can the real image of the three-dimensional model be displayed in the air through air imaging, but input signals generated by interaction with the audience, such as gesture signals, somatosensory signals, brain wave signals, eye movement signals, voice signals, touch signals, and image signals, can be combined to perform control operations on the model such as viewing-angle switching and color conversion, making the system more convenient and more functional.
Referring to FIG. 1, in a further preferred embodiment, the signal detection device includes:
a somatosensory sensor for acquiring an input somatosensory signal;
a gesture sensor for acquiring an input gesture signal;
an eye tracker for acquiring an input eye movement signal;
a touch module for acquiring an input touch signal;
a voice collection module for acquiring an input voice signal;
a brain wave acquisition device for acquiring an input brain wave signal;
a camera for acquiring an input image signal.
Specifically, the somatosensory sensor captures the user's somatosensory signals (body movement signals such as swaying or shaking left and right).
The gesture sensor captures the user's gesture signals (such as palm and thumb motion signals).
The eye tracker captures the user's eye movements. Eye tracking serves two purposes: first, to automatically control the display device to switch the displayed content according to the focus of the eyes; for example, if the user's gaze stays on one of several three-dimensional models for a few seconds, the display device can be controlled to switch automatically to that model's scene for further detailed display; second, to monitor the user's eye movements in real time.
The touch module allows the user to input touch commands by touch. For example, user touch signals can be acquired through virtual touch buttons within the area displaying the three-dimensional model of the object (these buttons can also be generated by air imaging technology), or through a touch screen provided on the display device.
The voice collection module collects the user's voice signals. Taking a three-dimensional model of a car as an example, the user's voice signal can be a voice command such as "open the door", "turn on the air conditioner in the car", or "change the body color to black". Preferably, the voice collection module may be a voice collection device such as a pickup or a microphone.
The brain wave acquisition device collects the user's brain wave signals in order to identify the user's intentions or thoughts, and thereby control the display device to perform corresponding operations on the three-dimensional model of the object, such as color switching and view switching.
The camera captures images of the user. Preferably, the camera may be an RGB-D camera capable of simultaneously collecting two-dimensional face image information and depth information, so as to obtain more accurate user images.
Referring to FIG. 2, in a further preferred embodiment, the control device includes:
a somatosensory recognition unit for identifying the user's somatosensory actions according to the acquired somatosensory signal;
a gesture recognition unit for recognizing the user's gestures according to the acquired gesture signal;
an eye movement recognition unit for identifying the user's eye movements according to the acquired eye movement signal;
a touch signal recognition unit for recognizing the user's touch instructions according to the acquired touch signal;
a voice intercom unit for recognizing the user's voice commands according to the acquired voice signal;
a brain wave signal recognition unit for identifying the user's brain waves according to the acquired brain wave signal;
a face recognition unit for performing face recognition on the user (including user identity recognition and facial expression recognition) according to the acquired image signal;
an intelligent control unit for triggering a control signal according to at least one of the user's face recognition result, somatosensory actions, gestures, eye movements, touch instructions, voice commands, and brain waves, so as to intelligently control the display device, where the intelligent control includes intelligent recognition of the three-dimensional model of the object, multilingual voice commentary, AI voice answering, automatic guided tours, scene switching, mode switching, and model special-effect control;
a storage unit for locally storing the three-dimensional model of the object and the control signals.
Specifically, identifying the user's eye movements mainly means identifying the focus of the viewer's attention (when seeing a part of interest, the user's eyes behave differently than otherwise; for example, the gaze dwells for a different amount of time), so that the control device can direct the display device to switch the three-dimensional model of the object to the focus or detail the viewer is attending to (for instance, if it is recognized that the user is focusing on the car body, the display switches directly to the three-dimensional model of the car body).
Face recognition is mainly used to identify the viewer's identity and facial expressions, and the viewer's current mood (such as happy or unhappy) can be recognized from the facial expressions.
Intelligent recognition of the three-dimensional model of an object means that when a user or viewer clicks or selects a component or element in the model through somatosensory input, gestures, or similar means, the name and other information of that component or element can be identified automatically (for example, by recognizing a corresponding label attached in advance). For example, if the three-dimensional model of the object is a car and the user clicks on a wheel, the control device automatically informs the user through text, sound, and so on that a wheel is currently selected, and the name "wheel" can be displayed at the corresponding position of the wheel in the imaging area.
Multilingual voice commentary means introducing to the user, in multiple languages (such as Chinese, English, and French), the three-dimensional model of the object and detailed information about a component or element in the model (such as history, parameters, sales, performance, features, and after-sales service). For example, if the three-dimensional model of the object is a car, the car's brand, model, price, and other information can be introduced in Chinese; likewise, after the user selects a wheel, its parameters and performance can be introduced by voice. The multilingual voice commentary content can be pre-stored locally in the control device or on the Internet, in the cloud, or on a back-end server, and read or called when needed.
AI voice answering means automatically answering users' questions. The answers to the users' questions can be pre-stored or entered locally in the control device, or stored directly in the cloud or on a back-end server, to be read automatically when needed. For example, the user asks: "Which year was this car produced? How is the price/performance ratio?" The control device first searches locally for a corresponding answer; if one exists, it answers directly by voice, and if not, it obtains the corresponding answer from the cloud or back-end server over the Internet and then broadcasts it by voice.
Automatic guided tours means automatically providing users with tour information so that they can understand the details. For example, a user unfamiliar with the three-dimensional model of a museum can learn the general layout through an automatic tour, and information about each part of the museum can be introduced automatically one by one through voice and other means (after one location is introduced, the tour moves to the next; the introduction or navigation order can be preset).
Scene switching refers to switching between three-dimensional models of different scenes, such as switching from a museum scene to an automobile exhibition scene.
Mode switching refers to switching between different modes. Taking the three-dimensional model of a car as an example, mode switching can be switching of the viewing angle, the body color, the interior trim, the wheel hubs, and so on.
Model special-effect control means providing model special effects such as exploded views, perspective views, transformer special effects, and fluid special effects of the three-dimensional model of the object, so that users can obtain more model information (such as internal parts) and enjoy more dynamic effects. For example, when the three-dimensional model of the object is a watch, an exploded view of the watch can be displayed to help the user understand its internal parts and structure.
Referring to FIG. 1, in a further preferred embodiment, the system also includes a communication system, and the communication system includes:
a voice message unit for providing a voice message service;
an SMS message unit for providing an SMS message service;
a telephone customer service unit for providing telephone customer service;
a robot customer service unit for providing robot voice customer service;
a live video customer service unit for providing live video customer service.
Specifically, in addition to traditional voice message, SMS message, and telephone customer service, the present invention can also provide robot voice customer service (a model pre-trained through self-learning) and live video customer service (a customizable model obtained by scanning and modeling a real person), making the service more flexible.
In a further preferred embodiment, the system also includes a big data analysis module, and the big data analysis module includes:
a new user statistics unit for counting the number of new users;
a user retention statistics unit for counting the number of retained users;
an activity analysis unit for analyzing user activity;
a user information analysis unit for analyzing the users' gender, age, registration information, IP distribution, and regional distribution;
a hot spot analysis unit for analyzing the users' viewing hot spots and generating corresponding heat maps;
a user viewing behavior analysis unit for performing user viewing behavior analysis, where the user viewing behavior analysis includes at least one of face recognition analysis, somatosensory action analysis, gesture analysis, eye tracking, mouse browsing trajectory analysis with video recording, voice analysis, and brain wave analysis;
an information sharing unit for sharing and publishing the analysis results of the big data analysis module.
Specifically, the big data analysis module mainly analyzes users' behavior while they view the three-dimensional model of the object displayed by the display device, to facilitate subsequent use (such as product recommendations).
The information sharing unit can share and publish the big data analysis results to existing social media such as WeChat, Weibo, and blogs.
In a further preferred embodiment, the system also includes a back-end server for communicating with the control device, remotely storing the three-dimensional model of the object, the multilingual voice commentary content, and the AI voice answer content, and remotely controlling the display content of the display device through the control device.
Specifically, this embodiment also adds a back-end server that can remotely control the display content of the display device, meeting the personalized customization requirements of different users; the display device can be controlled remotely anytime and anywhere, realizing cross-space data sharing and greater convenience.
Controlling the display content of the display device mainly means controlling the three-dimensional model of the object, for example zooming it or switching its colors.
Referring to FIG. 3, an embodiment of the present invention also provides an implementation method of an air imaging system, including the following steps:
forming a three-dimensional model of an object in the air through air imaging by means of a display device;
detecting an input signal, where the input signal includes at least one of a gesture signal, a somatosensory signal, a brain wave signal, an eye movement signal, a voice signal, a touch signal, and an image signal;
controlling display on the display device according to the input signal.
In a further preferred embodiment, the step of detecting the input signal specifically includes:
acquiring an input somatosensory signal;
acquiring an input gesture signal;
acquiring an input eye movement signal;
acquiring an input touch signal;
acquiring an input voice signal;
acquiring an input brain wave signal;
acquiring an input image signal.
In a further preferred embodiment, the step of controlling display on the display device according to the input signal specifically includes:
identifying the user's somatosensory actions according to the acquired somatosensory signal;
recognizing the user's gestures according to the acquired gesture signal;
identifying the user's eye movements according to the acquired eye movement signal;
recognizing the user's touch instructions according to the acquired touch signal;
recognizing the user's voice commands according to the acquired voice signal;
identifying the user's brain waves according to the acquired brain wave signal;
performing face recognition on the user according to the acquired image signal;
triggering a control signal according to at least one of the user's face recognition result, somatosensory actions, gestures, eye movements, touch instructions, voice commands, and brain waves, so as to intelligently control the display device, where the intelligent control includes intelligent recognition of the three-dimensional model of the object, multilingual voice commentary, AI voice answering, automatic guided tours, scene switching, mode switching, and model special-effect control;
locally storing the three-dimensional model of the object and the control signals.
In a further preferred embodiment, the method also includes the following step:
remotely storing the three-dimensional model of the object, the multilingual voice commentary content, and the AI voice answer content on a back-end server, and remotely controlling the display content of the display device.
The contents of the above system embodiments all apply to this method embodiment; the functions specifically implemented by this method embodiment are the same as those of the above system embodiments, and the beneficial effects achieved are also the same.
The preferred embodiments of the present invention have been described in detail above, but the present invention is not limited to these embodiments. Those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the present invention, and these equivalent modifications or substitutions are all included within the scope defined by the claims of this application.

Claims (10)

  1. An air imaging system, characterized by comprising:
    a display device for forming a three-dimensional model of an object in the air through air imaging;
    a signal detection device for detecting an input signal, wherein the input signal comprises at least one of a gesture signal, a somatosensory signal, a brain wave signal, an eye movement signal, a voice signal, a touch signal, and an image signal;
    a control device for controlling display on the display device according to the input signal.
  2. The air imaging system according to claim 1, characterized in that the signal detection device comprises:
    a somatosensory sensor for acquiring an input somatosensory signal;
    a gesture sensor for acquiring an input gesture signal;
    an eye tracker for acquiring an input eye movement signal;
    a touch module for acquiring an input touch signal;
    a voice collection module for acquiring an input voice signal;
    a brain wave acquisition device for acquiring an input brain wave signal;
    a camera for acquiring an input image signal.
  3. The air imaging system according to claim 2, characterized in that the control device comprises:
    a somatosensory recognition unit for identifying the user's somatosensory actions according to the acquired somatosensory signal;
    a gesture recognition unit for recognizing the user's gestures according to the acquired gesture signal;
    an eye movement recognition unit for identifying the user's eye movements according to the acquired eye movement signal;
    a touch signal recognition unit for recognizing the user's touch instructions according to the acquired touch signal;
    a voice intercom unit for recognizing the user's voice commands according to the acquired voice signal;
    a brain wave signal recognition unit for identifying the user's brain waves according to the acquired brain wave signal;
    a face recognition unit for performing face recognition on the user according to the acquired image signal;
    an intelligent control unit for triggering a control signal according to at least one of the user's face recognition result, somatosensory actions, gestures, eye movements, touch instructions, voice commands, and brain waves, so as to intelligently control the display device, wherein the intelligent control comprises intelligent recognition of the three-dimensional model of the object, multilingual voice commentary, AI voice answering, automatic guided tours, scene switching, mode switching, and model special-effect control;
    a storage unit for locally storing the three-dimensional model of the object and the control signals.
  4. The air imaging system according to claim 1, characterized by further comprising a communication system, wherein the communication system comprises:
    a voice message unit for providing a voice message service;
    an SMS message unit for providing an SMS message service;
    a telephone customer service unit for providing telephone customer service;
    a robot customer service unit for providing robot voice customer service;
    a live video customer service unit for providing live video customer service.
  5. The air imaging system according to claim 1, characterized by further comprising a big data analysis module, wherein the big data analysis module comprises:
    a new user statistics unit for counting the number of new users;
    a user retention statistics unit for counting the number of retained users;
    an activity analysis unit for analyzing user activity;
    a user information analysis unit for analyzing the users' gender, age, registration information, IP distribution, and regional distribution;
    a hot spot analysis unit for analyzing the users' viewing hot spots and generating corresponding heat maps;
    a user viewing behavior analysis unit for performing user viewing behavior analysis, wherein the user viewing behavior analysis comprises at least one of face recognition analysis, somatosensory action analysis, gesture analysis, eye tracking, mouse browsing trajectory analysis with video recording, voice analysis, and brain wave analysis;
    an information sharing unit for sharing and publishing the analysis results of the big data analysis module.
  6. The air imaging system according to claim 3, characterized by further comprising a back-end server, wherein the back-end server is configured to communicate with the control device, remotely store the three-dimensional model of the object, the multilingual voice commentary content, and the AI voice answer content, and remotely control the display content of the display device through the control device.
  7. An implementation method of an air imaging system, characterized by comprising the following steps:
    forming a three-dimensional model of an object in the air through air imaging by means of a display device;
    detecting an input signal, wherein the input signal comprises at least one of a gesture signal, a somatosensory signal, a brain wave signal, an eye movement signal, a voice signal, a touch signal, and an image signal;
    controlling display on the display device according to the input signal.
  8. The implementation method of an air imaging system according to claim 7, characterized in that the step of detecting the input signal specifically comprises:
    acquiring an input somatosensory signal;
    acquiring an input gesture signal;
    acquiring an input eye movement signal;
    acquiring an input touch signal;
    acquiring an input voice signal;
    acquiring an input brain wave signal;
    acquiring an input image signal.
  9. The implementation method of an air imaging system according to claim 8, characterized in that the step of controlling display on the display device according to the input signal specifically comprises:
    identifying the user's somatosensory actions according to the acquired somatosensory signal;
    recognizing the user's gestures according to the acquired gesture signal;
    identifying the user's eye movements according to the acquired eye movement signal;
    recognizing the user's touch instructions according to the acquired touch signal;
    recognizing the user's voice commands according to the acquired voice signal;
    identifying the user's brain waves according to the acquired brain wave signal;
    performing face recognition on the user according to the acquired image signal;
    triggering a control signal according to at least one of the user's face recognition result, somatosensory actions, gestures, eye movements, touch instructions, voice commands, and brain waves, so as to intelligently control the display device, wherein the intelligent control comprises intelligent recognition of the three-dimensional model of the object, multilingual voice commentary, AI voice answering, automatic guided tours, scene switching, mode switching, and model special-effect control;
    locally storing the three-dimensional model of the object and the control signals.
  10. The implementation method of an air imaging system according to claim 7, characterized by further comprising the following step:
    remotely storing the three-dimensional model of the object, the multilingual voice commentary content, and the AI voice answer content on a back-end server, and remotely controlling the display content of the display device.
PCT/CN2019/126949 2019-01-23 2019-12-20 Air imaging system and implementation method thereof WO2020151430A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910064301.3 2019-01-23
CN201910064301.3A CN109947239A (zh) 2019-01-23 2019-06-28 Air imaging system and implementation method thereof

Publications (1)

Publication Number Publication Date
WO2020151430A1 (zh)

Family

ID=67006662

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/126949 WO2020151430A1 (zh) 2019-01-23 2019-12-20 Air imaging system and implementation method thereof

Country Status (2)

Country Link
CN (1) CN109947239A (zh)
WO (1) WO2020151430A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109947239A (zh) 2019-01-23 2019-06-28 广东康云科技有限公司 Air imaging system and implementation method thereof
CN111402885A (zh) 2020-04-22 2020-07-10 北京万向新元科技有限公司 Interaction method and system based on voice and air imaging technology
CN113838395A (zh) 2021-09-18 2021-12-24 湖南美景创意文化建设有限公司 Ultra-high-contrast medium-free aerial imaging display screen for museums

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102789348A (zh) 2011-05-18 2012-11-21 北京东方艾迪普科技发展有限公司 Interactive three-dimensional graphics and video visualization system
US20130278600A1 (en) * 2012-04-18 2013-10-24 Per Bloksgaard Christensen Rendering interactive photorealistic 3d model representations
US20130328764A1 (en) * 2012-06-11 2013-12-12 Samsung Electronics Co., Ltd. Flexible display apparatus and control method thereof
CN108255292A (zh) 2017-12-06 2018-07-06 上海永微信息科技有限公司 Air imaging interaction system and method, control device, and storage medium
CN108573403A (zh) 2018-03-20 2018-09-25 广东康云多维视觉智能科技有限公司 Multi-dimensional visual shopping guide system and method
CN109085966A (zh) 2018-06-15 2018-12-25 广东康云多维视觉智能科技有限公司 Cloud-computing-based three-dimensional display system and method
CN109947239A (zh) 2019-01-23 2019-06-28 广东康云科技有限公司 Air imaging system and implementation method thereof

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103163720A (zh) 2011-12-16 2013-06-19 胡宗甫 Medium-free, screen-free aerial laser interference multi-dimensional 3D imaging system
CN104138167A (zh) 2013-05-06 2014-11-12 苏州金螳螂展览设计工程有限公司 Medium-free transparent imaging display cabinet
CN107703726A (zh) 2017-11-15 2018-02-16 深圳盈天下视觉科技有限公司 Aerial display system and aerial display method
CN108732747A (zh) 2018-06-01 2018-11-02 像航(上海)科技有限公司 Second-generation medium-free aerial imaging system and method without primary imaging

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102789348A (zh) 2011-05-18 2012-11-21 北京东方艾迪普科技发展有限公司 Interactive three-dimensional graphics and video visualization system
US20130278600A1 (en) * 2012-04-18 2013-10-24 Per Bloksgaard Christensen Rendering interactive photorealistic 3d model representations
US20130328764A1 (en) * 2012-06-11 2013-12-12 Samsung Electronics Co., Ltd. Flexible display apparatus and control method thereof
CN108255292A (zh) 2017-12-06 2018-07-06 上海永微信息科技有限公司 Air imaging interaction system and method, control device, and storage medium
CN108573403A (zh) 2018-03-20 2018-09-25 广东康云多维视觉智能科技有限公司 Multi-dimensional visual shopping guide system and method
CN109085966A (zh) 2018-06-15 2018-12-25 广东康云多维视觉智能科技有限公司 Cloud-computing-based three-dimensional display system and method
CN109947239A (zh) 2019-01-23 2019-06-28 广东康云科技有限公司 Air imaging system and implementation method thereof

Also Published As

Publication number Publication date
CN109947239A (zh) 2019-06-28

Similar Documents

Publication Publication Date Title
CN109085966B (zh) Cloud-computing-based three-dimensional display system and method
CN112352209B (zh) System and method for interacting and interfacing with an artificial intelligence system
US10163111B2 (en) Virtual photorealistic digital actor system for remote service of customers
US10817760B2 (en) Associating semantic identifiers with objects
US11126257B2 (en) System and method for detecting human gaze and gesture in unconstrained environments
US9563272B2 (en) Gaze assisted object recognition
US11397462B2 (en) Real-time human-machine collaboration using big data driven augmented reality technologies
CN109074117B (zh) Emotion-based cognitive assistant system, method, and computer-readable medium
US20170185276A1 (en) Method for electronic device to control object and electronic device
WO2020151430A1 (zh) Air imaging system and implementation method thereof
CN109176535B (zh) Interaction method and system based on an intelligent robot
JP7254772B2 (ja) Method and device for robot interaction
US9870058B2 (en) Control of a real world object user interface
Varona et al. Hands-free vision-based interface for computer accessibility
CN111448568B (zh) Environment-based application presentation
WO2020151431A1 (zh) Data processing method and system for intelligent car viewing
CN113835522A (zh) Sign language video generation, translation, and customer service methods, device, and readable medium
WO2020151255A1 (zh) Mobile-terminal-based display control system and method
TW202127319A (zh) State recognition method and apparatus, electronic device, and computer-readable storage medium
CN106710490A (zh) Display window system and implementation method thereof
Schiele et al. Sensory-augmented computing: Wearing the museum's guide
CN113851029B (zh) Barrier-free communication method and device
CN116520982B (zh) Virtual character switching method and system based on multi-modal data
TW201351308A (zh) Non-contact medical navigation system and control method thereof
Sparacino Natural interaction in intelligent spaces: Designing for architecture and entertainment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19911973

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19911973

Country of ref document: EP

Kind code of ref document: A1