TW201942714A - Information output method, apparatus and system - Google Patents

Information output method, apparatus and system

Info

Publication number
TW201942714A
Authority
TW
Taiwan
Prior art keywords
information
eye
interactive device
gaze point
interactive
Prior art date
Application number
TW108109579A
Other languages
Chinese (zh)
Inventor
趙海杰
黃通兵
王雲飛
秦林嬋
Original Assignee
大陸商北京七鑫易維信息技術有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 大陸商北京七鑫易維信息技術有限公司
Publication of TW201942714A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/19Sensors therefor

Abstract

The application discloses an information output method, apparatus, and system. The method includes the following steps: an interactive device acquires an eye image of an object; the interactive device determines the position of the object's gaze point according to the eye image; and the interactive device outputs interaction information according to the gaze point position and object information of the object. The method, apparatus, and system solve the technical problem that the gaze point cannot be accurately located in human-machine interaction with an interactive device.

Description

Method, device and system for outputting information

The present invention relates to the field of human-computer interaction, and in particular to a method, a device, and a system for outputting information.

At present, robots are widely used across many industries; in the catering industry, for example, a robot can take orders and deliver meals through human-computer interaction. In the prior art, when a robot interacts with a person, it typically uses face recognition to estimate the eye position, for example by assuming that the eyes lie in the upper third of the detected face. It then treats that estimated position as the person's actual gaze point and adjusts its own eyeballs accordingly, so that the robot's eyes make eye contact with the person's.

However, because individual differences between people are large, predicting the eye position by face recognition and treating it as the actual gaze point introduces substantial error. For example, it often happens that a person is not actually looking at the robot, but the face falls within the robot's face-recognition range and triggers an interaction anyway, producing a false interaction. Conversely, when a person intends to interact but is not directly facing the robot, the robot cannot determine where the person is looking by face recognition alone, so the interaction cannot be completed and is missed.

No effective solution has yet been proposed for the problem that, in human-computer interaction with such interactive devices, the gaze point cannot be accurately located, that is, that existing face recognition technology cannot accurately identify the position of the human eye's gaze point.

In view of this, the inventors have devoted themselves to further research, development, and improvement in pursuit of a better design to solve the above problems, and the present invention emerged after continuous testing and modification.

Embodiments of the present invention provide a method, an apparatus, and a system for outputting information, so as to at least solve the technical problem that the gaze point cannot be accurately located in human-computer interaction with an interactive device.

According to one aspect of the embodiments of the present invention, a method for outputting information is provided, including: an interactive device acquiring an eye image of an object; the interactive device determining the gaze point position of the object according to the eye image; and the interactive device outputting interaction information according to the gaze point position and the object information of the object.

According to another aspect of the embodiments of the present invention, a system for outputting information is further provided, including: an image acquisition device for acquiring an eye image of an object; and a processor, connected to the image acquisition device, for determining the gaze point position of the object according to the eye image and outputting interaction information according to the gaze point position and the object information of the object.

According to another aspect of the embodiments of the present invention, an apparatus for outputting information is further provided, including: an acquisition module, used by the interactive device to acquire an eye image of an object; a determination module, used by the interactive device to determine the gaze point position of the object according to the eye image; and an output module, used by the interactive device to output interaction information according to the gaze point position and the object information of the object.

According to another aspect of the embodiments of the present invention, a storage medium is further provided. The storage medium includes a stored program, and the program executes the method for outputting information.

According to another aspect of the embodiments of the present invention, a processor is further provided. The processor is configured to run a program, and the program runs the method for outputting information.

In the embodiments of the present invention, eye tracking is combined with object information: an eye image of the object is acquired, the gaze point position of the object is determined from the eye image, and interaction information is then output according to the gaze point position and the object information of the object. This achieves the goal of accurately identifying the gaze point position of the object, realizes the technical effect of accurate interaction between the interactive device and the object, and thereby solves the technical problem that the gaze point cannot be accurately located in human-computer interaction with the interactive device.

Regarding the technical means of the inventors, several preferred embodiments are described in detail below in conjunction with the drawings, so that the present invention may be thoroughly understood and appreciated.

In order to enable those of ordinary skill in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without inventive effort shall fall within the protection scope of the present invention.

It should be noted that the terms "first", "second", and the like in the specification, the claims, and the above drawings are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data so used may be interchanged where appropriate, so that the embodiments of the present invention described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "including" and "having", and any variants thereof, are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the steps or units explicitly listed, but may include other steps or units that are not explicitly listed or that are inherent to such a process, method, product, or device.

Embodiment 1

According to an embodiment of the present invention, a method embodiment for outputting information is provided. It should be noted that the steps shown in the flowchart of the accompanying drawings may be executed in a computer system such as a set of computer-executable instructions, and that, although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from that given here.

FIG. 1 is a flowchart of a method for outputting information according to an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps:

Step S102: the interactive device acquires an eye image of the object.

It should be noted that the above interactive device includes a robot; more generally, the interactive device is any device capable of human-computer interaction, for example a service robot in a hotel or a cleaning robot in the home. The robot includes one or more image acquisition units, and an image acquisition unit can acquire an eye image of the object. The image acquisition unit may be, but is not limited to, a camera, with infrared light sources arranged around it; the number, brightness, and mounting positions of the infrared light sources can be set according to the actual situation. In addition, when there are multiple image acquisition units, they are located at different positions on the robot, for example at the robot's head, waist, and feet, so that the robot can capture an eye image of the object while the object is looking at the robot.
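As an illustration of this capture step, the following is a minimal sketch of polling several cameras and cropping the eye region, assuming OpenCV and a stock Haar-cascade eye detector; the camera indices and the detector choice are illustrative assumptions, not specified by the patent.

```python
# Minimal sketch: grab one frame from each camera and crop any detected
# eye regions. Camera IDs and the Haar-cascade detector are assumptions.
import cv2

CAMERA_IDS = [0, 1, 2]  # e.g. head-, waist-, and foot-mounted cameras

def capture_eye_images(camera_ids=CAMERA_IDS):
    eye_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")
    crops = []
    for cam_id in camera_ids:
        cap = cv2.VideoCapture(cam_id)
        ok, frame = cap.read()
        cap.release()
        if not ok:
            continue  # camera missing or frame grab failed
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in eye_detector.detectMultiScale(gray):
            crops.append((cam_id, gray[y:y + h, x:x + w]))
    return crops
```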

Step S104: the interactive device determines the gaze point position of the object according to the eye image.

In an optional embodiment, the robot's eyes contain an image acquisition unit. When the object makes eye contact with the robot, this unit acquires an eye image of the object, and the robot processes the image to obtain information such as the object's pupil center position, the center positions of the glints formed by the infrared light sources, the positions of the upper and lower eyelids, and iris data. It then analyzes this information to obtain the gaze point position of the object.

In another optional embodiment, the robot has multiple image acquisition units. When the object is not making eye contact with the robot, the units located on different parts of the robot capture multiple eye images from multiple angles. The captured images are analyzed together with the positions of the image acquisition units to determine the object's pupil center position, the center positions of the glints formed by the infrared light sources, the positions of the upper and lower eyelids, iris data, and other information, and this information is then analyzed to obtain the gaze point position of the object.

Step S106: the interactive device outputs interaction information according to the gaze point position and the object information of the object.

It should be noted that the object information of the object includes at least one of the following: expression information of the object, motion information of the object, and voice information of the object.

In an optional embodiment, when the object looks at the robot and talks with it, the robot determines the gaze point position of the object by capturing an eye image and then looks back at the object according to that gaze point position, so that the object feels respected. When the object and the robot are not making eye contact, the robot first determines the gaze point position from the captured eye image and then outputs the corresponding interaction information in combination with the object's expression, speech, and so on at that moment. For example, in a shoe store, the object looks at the robot's shoes and says, "The shoes you are wearing are very beautiful." From the eye image the robot determines that the gaze point lies on its lower part, and combined with the voice information it determines that the gaze target is the shoes. The robot then outputs the interaction information: "You could try this pair as well; I believe it would look good on you too."
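A minimal sketch of this gaze-plus-speech fusion follows, assuming the gaze point has already been mapped to a named region of the robot's body and the speech has been transcribed to text; the region names, keywords, and canned replies are invented for illustration and are not the patent's dialogue policy.

```python
# Map (gaze region, keyword in utterance) pairs to replies; everything in
# RESPONSES is a made-up placeholder.
RESPONSES = {
    ("lower_body", "shoes"): "You could try this pair too; I think it would look good on you.",
}

def fuse(gaze_region: str, utterance: str) -> str:
    for (region, keyword), reply in RESPONSES.items():
        if gaze_region == region and keyword in utterance.lower():
            return reply
    return "How can I help you?"  # fallback when nothing matches

print(fuse("lower_body", "The shoes you are wearing are very beautiful"))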

Based on the solution defined by steps S102 to S106 above, it can be seen that the method acquires an eye image of the object, determines the gaze point position of the object according to the eye image, and finally outputs interaction information according to the gaze point position and the object information of the object.

It is easy to notice that analyzing the eye image yields an accurate gaze point position for the object. In addition, once the gaze point position has been obtained, the object information of the object enables the interactive device to output accurate interaction information, thereby realizing the interaction between the object and the interactive device and improving the human-computer interaction experience.

It can be seen from the above that the method for outputting information provided by the present invention achieves the goal of accurately identifying the gaze point position of the object, thereby realizing the technical effect of accurate interaction between the interactive device and the object and solving the technical problem that the gaze point cannot be accurately located in human-computer interaction with the interactive device.

In an optional embodiment, as shown in the flowchart of FIG. 2, the method for outputting information provided by the present invention mainly includes five steps: image acquisition, image analysis, gaze point estimation, content processing that fuses the gaze point, and data output. In the image acquisition step, the infrared light sources on the robot project light toward the object's eyes, and the image acquisition unit in the robot then captures images of the object's eyes and the surrounding region, yielding the object's eye image. FIG. 3 shows an optional image acquisition process.
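The control flow of those five steps can be sketched as follows; every stage body is a stub, since the patent fixes the pipeline but not the algorithms inside each stage.

```python
# Skeleton of the five-step flow in FIG. 2. Only the control flow follows
# the text; all stage implementations are stand-ins.
def acquire_image():
    return "eye_image"                           # 1. image acquisition (IR-lit frame)

def analyze_image(image):
    return {"pupil": (0.0, 0.0), "glints": []}   # 2. image analysis -> eye features

def estimate_gaze(features):
    return (0.4, 0.7)                            # 3. gaze point estimation

def fuse_with_content(gaze, object_info):
    return {"gaze": gaze, **object_info}         # 4. content processing fusing the gaze point

def output_data(result):
    print("interaction:", result)                # 5. data output

def run_pipeline(object_info):
    features = analyze_image(acquire_image())
    output_data(fuse_with_content(estimate_gaze(features), object_info))

run_pipeline({"speech": "hello"})
```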

It should be noted that, after the eye image of the object is obtained, the eye image is analyzed by a suitable algorithm, for example an algorithmic model or a deep-learning neural network, to obtain the eye feature information of the object. The gaze point position of the object is then determined from this eye feature information, i.e., the gaze point is estimated. The specific steps are as follows:

Step S202: the interactive device analyzes the eye feature information of the object according to the eye image;

Step S204: the interactive device determines the gaze point position of the object according to the eye feature information.

It should be noted that the above eye feature information may include, but is not limited to, the object's pupil position, pupil shape, iris position, iris shape, eyelid position, eye corner position, and glint position (i.e., the position of the Purkinje image).
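One possible container for these features is sketched below; the field names and types are assumptions, since the patent only enumerates the feature categories.

```python
# Hypothetical container for the enumerated eye features.
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class EyeFeatures:
    pupil_position: Point
    pupil_shape: List[Point]                  # e.g. points on a fitted ellipse
    iris_position: Point
    iris_shape: List[Point]
    eyelid_positions: Tuple[Point, Point]     # upper and lower eyelid
    eye_corner_positions: Tuple[Point, Point]
    glint_positions: List[Point] = field(default_factory=list)  # Purkinje images
```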

Specifically, after obtaining the eye feature information of the object, the robot builds a geometric model of the object's eyeball in space based on the eye feature information and analyzes the eye image according to this model to obtain the gaze point position of the object.
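The patent does not spell out the model, but a common way to realize this step is the pupil-corneal-reflection technique: map the pupil-to-glint vector to a gaze point through a polynomial fitted during a calibration phase. A sketch under that assumption:

```python
# Pupil-glint polynomial gaze mapping: a standard technique offered here
# only as one plausible realization of the geometric model in the text.
import numpy as np

def fit_gaze_mapping(pupil_glint_vectors, known_gaze_points):
    """Least-squares fit of [1, x, y, xy, x^2, y^2] -> (gx, gy).

    Needs at least six calibration samples, e.g. from a look-at-the-dots
    calibration routine.
    """
    X = np.array([[1, x, y, x * y, x * x, y * y]
                  for x, y in pupil_glint_vectors])
    Y = np.array(known_gaze_points)
    coeffs, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coeffs  # shape (6, 2)

def estimate_gaze_point(coeffs, pupil_glint_vector):
    x, y = pupil_glint_vector
    return np.array([1, x, y, x * y, x * x, y * y]) @ coeffs
```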

It should be noted that, because the process of human-computer interaction is relatively complex, after the robot identifies the gaze point position of the object it performs a second round of recognition on the object's behavior (its expression, motion, speech, and other information) so that the robot can give accurate feedback; this completes the content processing that fuses the gaze point. After that, the robot outputs interaction information according to the gaze point position and the object information of the object. The specific steps are as follows:

Step S1060: the interactive device processes the gaze point position and the object information of the object to obtain a processing result;

Step S1062: the interactive device determines an operation instruction according to the processing result;

Step S1064: the interactive device sends the operation instruction to the corresponding component of the interactive device, where the operation instruction is used to control the corresponding component to output interaction information.

In an optional embodiment, the robot's processing center processes the gaze point position and the object information of the object, determines the feedback each component of the robot should give, then determines the operation instruction corresponding to that feedback and sends the instruction to the corresponding component. For example, if the processing center determines that the robot's eyes should blink and its hand should be raised, it generates a blink instruction for the blinking and a rotation instruction for the raising, sends the blink instruction to the component controlling the eyes, and sends the rotation instruction to the component controlling the hand.
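A minimal sketch of that dispatch, assuming the processing result has already been reduced to a dictionary of intents; the component and command names mirror the blink/raise example above but are otherwise invented.

```python
# Processing center -> per-component operation instructions. Component and
# command names are illustrative stand-ins.
class Component:
    def __init__(self, name: str):
        self.name = name

    def execute(self, command: str):
        print(f"{self.name} executes {command}")

COMPONENTS = {"eyes": Component("eyes"), "hand": Component("hand")}

def dispatch(processing_result: dict):
    commands = []
    if processing_result.get("greet"):
        commands.append(("eyes", "blink"))      # blink instruction
        commands.append(("hand", "rotate_up"))  # rotation instruction to raise the hand
    for target, command in commands:
        COMPONENTS[target].execute(command)

dispatch({"greet": True})
```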

In an optional embodiment, the method for outputting information provided by the present invention can be applied in the catering industry, for example to a robot that receives customers in a restaurant, takes their orders, serves dishes, handles the bill, and sees customers out. Specifically, a customer walking down the street looks for a suitable restaurant. When the customer looks at the restaurant's name, the tables inside, or passers-by in the store, the robot can determine the customer's gaze direction and gaze point position through its image acquisition unit or an eye-movement module with eye-tracker functionality; if the gaze direction points at the restaurant, the robot approaches the customer and issues the voice prompt "Welcome in, come and have a taste." After the customer enters, the robot leads the customer to a free table. When the restaurant has multiple robots and the customer looks at another robot, the eye-movement module on that robot likewise recognizes the gaze direction and greets the customer with "Welcome." When the customer chooses a seat, the robot uses the eye-movement module to estimate which seat the customer is looking at and for how long, or analyzes the customer's pupil changes to gauge preference for different seats, then voices a seat recommendation and leads the customer to it. When the customer chooses dishes, the robot shows the customer the display screen on its body. While the customer looks at the screen, the robot can use the eye-movement module to measure how long and how often the customer gazes at each dish, analyze these data to infer the dishes the customer is interested in and the customer's latent preferences, and then recommend dishes on the display or place the order directly. When serving, the robot uses the eye-movement module to determine which customer is looking at which dish, and for a relatively long time, and places the corresponding dish beside that customer. During the meal, the robot uses the eye-movement module to detect whether a customer is looking at it; if so, the customer has a strong need for the robot's help, and the robot immediately responds by meeting the customer's gaze and moving toward the customer with a smile. Finally, when the customers have finished eating and several of them look at the robot at the same time, the robot's eye-movement module determines whether they want the bill by analyzing how long they have been dining and how much food remains on the table; if so, the robot completes the checkout. When a customer leaves the restaurant and looks at a robot in the store, the robot's eye-movement module recognizes the gaze, and the robot meets the customer's line of sight as a courtesy, producing a better human-computer interaction experience.
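The menu step above hinges on dwell-time statistics. A sketch of that analysis, assuming gaze samples already mapped to display coordinates; the dish regions, frame rate, and sample data are invented:

```python
# Rank dishes by total gaze dwell time inside each dish's screen region.
from collections import defaultdict

def rank_dishes(gaze_samples, dish_regions, dt=1 / 30):
    """gaze_samples: (x, y) pairs at ~30 Hz; dish_regions: name -> (x0, y0, x1, y1)."""
    dwell = defaultdict(float)
    for x, y in gaze_samples:
        for dish, (x0, y0, x1, y1) in dish_regions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                dwell[dish] += dt  # one frame of dwell time
    return sorted(dwell.items(), key=lambda kv: kv[1], reverse=True)

regions = {"noodles": (0, 0, 100, 100), "dumplings": (100, 0, 200, 100)}
samples = [(50, 50)] * 90 + [(150, 50)] * 30  # 3 s on noodles, 1 s on dumplings
print(rank_dishes(samples, regions))          # noodles ranked first
```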

Embodiment 2

According to an embodiment of the present invention, a system embodiment for outputting information is further provided. The system can execute the method for outputting information in Embodiment 1. FIG. 4 is a schematic structural diagram of a system for outputting information according to an embodiment of the present invention. As shown in FIG. 4, the system includes an image acquisition device 401 and a processor 403.

The image acquisition device 401 is configured to acquire an eye image of an object; the processor 403, connected to the image acquisition device, is configured to determine the gaze point position of the object according to the eye image and to output interaction information according to the gaze point position and the object information of the object.

It should be noted that the interactive device includes a robot, and that the object information of the object includes at least one of the following: expression information of the object, motion information of the object, and voice information of the object.

It can be seen from the above that the image acquisition device acquires an eye image of the object, and the processor connected to the image acquisition device determines the gaze point position of the object according to the eye image and outputs interaction information according to the gaze point position and the object information of the object.

It is easy to notice that analyzing the eye image yields an accurate gaze point position for the object. In addition, once the gaze point position has been obtained, the object information of the object enables the interactive device to output accurate interaction information, thereby realizing the interaction between the object and the interactive device and improving the human-computer interaction experience.

It can be seen from the above that the method for outputting information provided by the present invention achieves the goal of accurately identifying the gaze point position of the object, thereby realizing the technical effect of accurate interaction between the interactive device and the object and solving the technical problem that the gaze point cannot be accurately located in human-computer interaction with the interactive device.

In an optional embodiment, the processor is further configured to analyze the eye feature information of the object according to the eye image and to determine the gaze point position of the object according to the eye feature information, where the eye feature information may include, but is not limited to, the object's pupil position, pupil shape, iris position, iris shape, eyelid position, eye corner position, and glint position (i.e., the position of the Purkinje image).

In an optional embodiment, the processor is further configured to process the gaze point position and the object information of the object to obtain a processing result, then determine an operation instruction according to the processing result, and finally send the operation instruction to the corresponding component of the interactive device, where the operation instruction is used to control the corresponding component to output interaction information.

Embodiment 3

According to an embodiment of the present invention, an apparatus embodiment for outputting information is further provided. FIG. 5 is a schematic structural diagram of an apparatus for outputting information according to an embodiment of the present invention. As shown in FIG. 5, the apparatus includes an acquisition module 501, a determination module 503, and an output module 505.

The acquisition module 501 is used by the interactive device to acquire an eye image of an object; the determination module 503 is used by the interactive device to determine the gaze point position of the object according to the eye image; the output module 505 is used by the interactive device to output interaction information according to the gaze point position and the object information of the object.

It should be noted that the above acquisition module 501, determination module 503, and output module 505 correspond to steps S102 to S106 in Embodiment 1; the three modules implement the same examples and application scenarios as the corresponding steps, but are not limited to what is disclosed in Embodiment 1 above.

It should be noted that the interactive device includes a robot, and that the object information of the object includes at least one of the following: expression information of the object, motion information of the object, and voice information of the object.

In an optional embodiment, the determination module includes a first determination module, a second determination module, and a third determination module. The first determination module is used by the interactive device to analyze the eye feature information of the object according to the eye image; the third determination module is used by the interactive device to determine the gaze point position of the object according to the eye feature information, where the eye feature information may include, but is not limited to, the object's pupil position, pupil shape, iris position, iris shape, eyelid position, eye corner position, and glint position (i.e., the position of the Purkinje image).

It should be noted that the above first determination module, second determination module, and third determination module correspond to steps S202 to S204 in Embodiment 1; the modules implement the same examples and application scenarios as the corresponding steps, but are not limited to what is disclosed in Embodiment 1 above.

In an optional embodiment, the output module includes a processing module, a sixth determination module, and a control module. The processing module is used by the interactive device to process the gaze point position and the object information of the object to obtain a processing result; the sixth determination module is used by the interactive device to determine an operation instruction according to the processing result; the control module is used by the interactive device to send the operation instruction to the corresponding component of the interactive device, where the operation instruction is used to control the corresponding component to output interaction information.

It should be noted that the above processing module, sixth determination module, and control module correspond to steps S1060 to S1064 in Embodiment 1; the three modules implement the same examples and application scenarios as the corresponding steps, but are not limited to what is disclosed in Embodiment 1 above.

Embodiment 4

According to another aspect of the embodiments of the present invention, a storage medium is further provided. The storage medium includes a stored program, and the program executes the method for outputting information in Embodiment 1.

Embodiment 5

According to another aspect of the embodiments of the present invention, a processor is further provided. The processor is configured to run a program, and the program runs the method for outputting information in Embodiment 1.

The serial numbers of the above embodiments of the present invention are for description only and do not imply that any embodiment is better or worse than another.

In the above embodiments of the present invention, the description of each embodiment has its own emphasis. For parts not described in detail in one embodiment, reference may be made to the relevant descriptions of other embodiments.

In the several embodiments provided by the present invention, it should be understood that the disclosed technical content may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division into units may be a division by logical function; in actual implementation there may be other ways of dividing them, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, units, or modules, and may be electrical or take other forms.

The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.

If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.

The above are only preferred implementations of the present invention. It should be noted that, for those of ordinary skill in the art, several improvements and refinements can be made without departing from the principles of the present invention, and these improvements and refinements should also be regarded as falling within the protection scope of the present invention.

In summary, the technical means disclosed in the present invention can effectively solve the problems of the prior art and achieve the intended purposes and effects. They had not appeared in any publication or been publicly used before this application and possess lasting inventiveness; the invention therefore qualifies as an invention under the Patent Act. This application is filed in accordance with the law, with the earnest request that it be examined in detail and that an invention patent be granted.

However, the above are only several preferred embodiments of the present invention and should not be taken to limit the scope of its implementation; all equivalent changes and modifications made according to the scope of the patent application and the content of the specification of the present invention shall still fall within the scope covered by the patent of the present invention.

[The present invention]

401‧‧‧Image acquisition device

403‧‧‧Processor

501‧‧‧Acquisition module

503‧‧‧Determination module

505‧‧‧Output module

S102-S106‧‧‧Steps

FIG. 1 is a flowchart of a method for outputting information according to an embodiment of the present invention.

FIG. 2 is a flowchart of an optional method for outputting information according to an embodiment of the present invention.

FIG. 3 is a schematic diagram of an optional image acquisition process according to an embodiment of the present invention.

FIG. 4 is a schematic structural diagram of a system for outputting information according to an embodiment of the present invention.

FIG. 5 is a schematic structural diagram of an apparatus for outputting information according to an embodiment of the present invention.

Claims (15)

1. A method for outputting information, comprising: acquiring, by an interactive device, an eye image of an object; determining, by the interactive device, a gaze point position of the object according to the eye image; and outputting, by the interactive device, interaction information according to the gaze point position and object information of the object.

2. The method of claim 1, wherein the interactive device comprises a robot.

3. The method of claim 1, wherein the object information of the object comprises at least one of the following: expression information of the object, motion information of the object, and voice information of the object.

4. The method of claim 1, wherein determining, by the interactive device, the gaze point position of the object according to the eye image comprises: analyzing, by the interactive device, eye feature information of the object according to the eye image; and determining, by the interactive device, the gaze point position of the object according to the eye feature information.

5. The method of claim 4, wherein the eye feature information comprises at least one of the following: pupil position, pupil shape, iris position, iris shape, eyelid position, eye corner position, and glint position.

6. The method of claim 1, wherein outputting, by the interactive device, interaction information according to the gaze point position and the object information of the object comprises: processing, by the interactive device, the gaze point position and the object information of the object to obtain a processing result; determining, by the interactive device, an operation instruction according to the processing result; and sending, by the interactive device, the operation instruction to a corresponding component of the interactive device, wherein the operation instruction is used to control the corresponding component to output interaction information.

7. A system for outputting information, comprising: an image acquisition device for acquiring an eye image of an object; and a processor, connected to the image acquisition device, for determining a gaze point position of the object according to the eye image and outputting interaction information according to the gaze point position and object information of the object.

8. An apparatus for outputting information, comprising: an acquisition module for an interactive device to acquire an eye image of an object; a determination module for the interactive device to determine a gaze point position of the object according to the eye image; and an output module for the interactive device to output interaction information according to the gaze point position and object information of the object.

9. The apparatus of claim 8, wherein the interactive device comprises a robot.

10. The apparatus of claim 8, wherein the object information of the object comprises at least one of the following: expression information of the object, motion information of the object, and voice information of the object.

11. The apparatus of claim 8, wherein the determination module comprises: a first determination module for the interactive device to analyze eye feature information of the object according to the eye image; and a third determination module for the interactive device to determine the gaze point position of the object according to the eye feature information.

12. The apparatus of claim 11, wherein the eye feature information comprises at least one of the following: pupil position, pupil shape, iris position, iris shape, eyelid position, eye corner position, and glint position.

13. The apparatus of claim 8, wherein the output module comprises: a processing module for the interactive device to process the gaze point position and the object information of the object to obtain a processing result; a sixth determination module for the interactive device to determine an operation instruction according to the processing result; and a control module for the interactive device to send the operation instruction to a corresponding component of the interactive device, wherein the operation instruction is used to control the corresponding component to output interaction information.

14. A storage medium, comprising a stored program, wherein the program executes the method for outputting information according to any one of claims 1 to 6.

15. A processor, configured to run a program, wherein the program runs the method for outputting information according to any one of claims 1 to 6.
TW108109579A 2018-03-27 2019-03-20 Information output method, apparatus and system TW201942714A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810259997.0A CN108803866A (en) 2018-03-27 2018-03-27 The methods, devices and systems of output information
201810259997.0 2018-03-27

Publications (1)

Publication Number Publication Date
TW201942714A 2019-11-01

Family

ID=64095417

Family Applications (1)

Application Number Title Priority Date Filing Date
TW108109579A TW201942714A (en) 2018-03-27 2019-03-20 Information output method, apparatus and system

Country Status (3)

Country Link
CN (1) CN108803866A (en)
TW (1) TW201942714A (en)
WO (1) WO2019184620A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108803866A (en) * 2018-03-27 2018-11-13 北京七鑫易维信息技术有限公司 The methods, devices and systems of output information
CN109683705A (en) * 2018-11-30 2019-04-26 北京七鑫易维信息技术有限公司 The methods, devices and systems of eyeball fixes control interactive controls
CN110032982B (en) * 2019-04-22 2021-05-25 广东博智林机器人有限公司 Robot guiding method, device, robot and storage medium
CN111079555A (en) * 2019-11-25 2020-04-28 Oppo广东移动通信有限公司 User preference degree determining method and device, electronic equipment and readable storage medium
CN110928415B (en) * 2019-12-04 2020-10-30 上海飘然工程咨询中心 Robot interaction method based on facial actions

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1174337C (en) * 2002-10-17 2004-11-03 南开大学 Apparatus and method for identifying gazing direction of human eyes and its use
US9046924B2 (en) * 2009-03-04 2015-06-02 Pelmorex Canada Inc. Gesture based interaction with traffic data
CN103324287B (en) * 2013-06-09 2016-01-20 浙江大学 The method and system with the area of computer aided sketch drafting of brush stroke data is moved based on eye
US20170262051A1 (en) * 2015-03-20 2017-09-14 The Eye Tribe Method for refining control by combining eye tracking and voice recognition
CN104951808B (en) * 2015-07-10 2018-04-27 电子科技大学 A kind of 3D direction of visual lines methods of estimation for robot interactive object detection
CN105159460B (en) * 2015-09-10 2018-01-23 哈尔滨理工大学 The control method of the intelligent domestic appliance controller based on eye-tracking
CN105868694B (en) * 2016-03-24 2019-03-08 中国地质大学(武汉) The bimodal emotion recognition method and system acted based on facial expression and eyeball
CN105912120B (en) * 2016-04-14 2018-12-21 中南大学 Mobile robot man-machine interaction control method based on recognition of face
CN106294678B (en) * 2016-08-05 2020-06-26 北京光年无限科技有限公司 Topic initiating device and method for intelligent robot
CN106407772A (en) * 2016-08-25 2017-02-15 北京中科虹霸科技有限公司 Human-computer interaction and identity authentication device and method suitable for virtual reality equipment
CN106774837A (en) * 2016-11-23 2017-05-31 河池学院 A kind of man-machine interaction method of intelligent robot
CN107221332A (en) * 2017-06-28 2017-09-29 上海与德通讯技术有限公司 The exchange method and system of robot
CN107450729B (en) * 2017-08-10 2019-09-10 上海木木机器人技术有限公司 Robot interactive method and device
CN108803866A (en) * 2018-03-27 2018-11-13 北京七鑫易维信息技术有限公司 The methods, devices and systems of output information

Also Published As

Publication number Publication date
WO2019184620A1 (en) 2019-10-03
CN108803866A (en) 2018-11-13
