WO2018099076A1 - Method and apparatus for displaying a person's status - Google Patents

Method and apparatus for displaying a person's status

Info

Publication number
WO2018099076A1
WO2018099076A1 (PCT/CN2017/091319)
Authority
WO
WIPO (PCT)
Prior art keywords
person
facial
data
facial expression
visual data
Prior art date
Application number
PCT/CN2017/091319
Other languages
English (en)
French (fr)
Inventor
陈波
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Publication of WO2018099076A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships

Definitions

  • the present disclosure relates to information display technologies, such as methods and apparatus related to displaying a person's status.
  • the scheme for displaying a person's status may include: installing a display screen connected by a data line to a control circuit placed within the operator's reach, the control circuit including buttons, a speaker, and a single-chip microcomputer (MCU), with a button or a combination of buttons serving as the MCU's input signal.
  • statements corresponding to each button or button combination are preset in the MCU, and different display contents are selected through the buttons.
  • the usage scene includes: when a person inconveniences someone else, a button can be pressed and the display shows an apology or a cute expression; this simple operation can defuse everyday disputes.
  • the above scheme for displaying a person's status requires manual operation by the operator, which is cumbersome, and if applied to a driver it may create certain driving safety hazards.
  • presetting simple messages through buttons and button combinations is a relatively primitive implementation, leaves room for misoperation, and imposes a high learning and memorization cost on the user.
  • the present disclosure provides a method and apparatus for displaying a person's status, which can quickly and conveniently display the person's current state to surrounding drivers without requiring the driver to memorize anything.
  • the embodiment provides a method for displaying a person's status, comprising: collecting a person's facial data; recognizing the person's facial expression according to the facial data; converting the facial expression into visual data; and displaying the visual data.
  • collecting the person's facial data includes: tracking the positions of the person's facial features; comparing those positions with standard coordinates; and collecting the facial data when the positions are determined to fall within the standard coordinate range.
  • recognizing the person's facial expression according to the facial data comprises: determining facial-feature position coordinates from the collected facial data and comparing them with the facial-feature position coordinates of expression templates; when the coordinates fall within the threshold range of a template's feature coordinates, the collected person's facial expression is recognized.
  • displaying the visual data comprises at least one of: displaying the visual data at one or more positions on the vehicle body; and displaying the visual data at the display position corresponding to a button operation.
  • before converting the person's facial expression into visual data, the method further includes: recognizing a received voice instruction, identifying the facial expression of the person represented by the voice instruction, and determining whether that expression is consistent with the facial expression recognized according to the person's facial data; if they are consistent, the facial expression is converted into visual data; if not, collection of the person's facial data continues.
  • collecting the person's facial data includes: collecting the facial data periodically; or starting the collection after a start instruction is received.
  • the embodiment further provides an apparatus for displaying a person's status, comprising: an acquisition unit, a recognition unit, and a display unit.
  • the acquisition unit is configured to collect the person's facial data;
  • the recognition unit is configured to recognize the person's facial expression according to the facial data and to convert the facial expression into visual data;
  • the display unit is configured to display the visual data.
  • the recognition unit comprises: a facial-data recognition module, a facial-data conversion and output module, and a visual-data transmission module.
  • the facial-data recognition module is configured to recognize the facial data collected by the acquisition unit and convert it into the person's facial expression; the facial-data conversion and output module is configured to convert the facial expression into visual data; the visual-data transmission module is configured to transmit the visual data to the display unit.
  • the device further comprises a voice recognition unit and a determination unit.
  • the voice recognition unit is configured to receive a voice instruction, recognize it, and identify the facial expression of the person it represents; the determination unit is configured to determine whether the facial expression represented by the voice instruction is consistent with the facial expression recognized by the recognition unit; if they are consistent, the recognition unit is triggered to convert the facial expression into visual data; if not, the acquisition unit is triggered to continue collecting the person's facial data.
  • the display unit further includes a button configured to display the visual data in a display position corresponding to the button operation according to the button operation.
  • the method and apparatus for displaying a person's status provided by this embodiment enable other people to judge the person's current state from the displayed visual data, without requiring users to memorize numerous button combinations; the learning cost is low.
  • when the embodiment is applied to drivers and passengers, the person's current state can be quickly and conveniently shown to surrounding drivers, some types of traffic accidents can be avoided, and driving courtesy can be improved.
  • the embodiment can also be implemented by using existing terminal devices in various types of vehicles, which is simple and convenient to implement, and has a wider application range.
  • the embodiment further provides a computer readable storage medium storing computer executable instructions for performing the above method.
  • the embodiment also provides an electronic device including one or more processors, a memory, and one or more programs stored in the memory that, when executed by the one or more processors, perform the above method.
  • the embodiment further provides a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions that, when executed by a computer, cause the computer to perform any of the methods described above.
  • FIG. 1 is a schematic diagram of an implementation process of a method for displaying a person state in the embodiment.
  • FIG. 2 is a schematic diagram of an implementation process of collecting facial data of a person in the embodiment.
  • FIG. 3 is a schematic diagram of a process for recognizing a person's facial expression from facial data in the embodiment.
  • FIG. 4 is a schematic structural diagram of an apparatus for displaying a state of a person in the embodiment.
  • FIG. 5 is a schematic structural diagram of an identification unit in the embodiment.
  • FIG. 6 is a schematic diagram of a general hardware structure of an electronic device according to an embodiment of the present invention.
  • FIG. 1 is a schematic diagram of an implementation process of a method for displaying a person state in the embodiment. As shown in FIG. 1 , in this embodiment, a method for displaying a person state includes the following steps.
  • step 101 the face data of the person is collected.
  • the facial data may include position data of the person's facial features.
  • the method may further include: setting a time length for periodic execution; the time length may be input into the terminal by the user in advance (the terminal receives and stores the setting) or set by the terminal itself, for example to 5 minutes, in which case the terminal collects the person's facial data periodically at that interval.
  • the terminal can also receive a start command input by the user; after receiving it, the terminal starts working and collects the person's facial data, so that the operator can manually control when the terminal starts running.
  • step 102 the facial expression of the person is identified based on the facial data of the person.
  • the person's facial data may refer to the position coordinates of the features of the person's face.
  • the person's facial expression may refer to the changes in expression that appear on the face when the person experiences emotions such as joy, happiness, or anger.
  • step 103 the facial expression of the person is converted into visual data.
  • the visual data includes a plurality of image information that can indicate a person's state, such as a "smiley face” pattern indicating a thank-you, or an "exclamation mark” pattern indicating a reminder; and/or a plurality of text information indicating a person's state. For example, the text "Thank you” that expresses gratitude or the text "Please note” indicating the reminder.
  • step 104 the visual data is displayed.
  • the displaying the visual data may include displaying image information indicating a state of a person and/or a plurality of text information indicating a state of a person at a plurality of display positions simultaneously and/or separately.
  • this embodiment can be applied to drivers and passengers, quickly and conveniently showing their current state to surrounding drivers; it can also be applied to participants in a meeting, so that the host or speaker can keep abreast of the participants' state, such as whether they are interested in the content of the meeting.
  • the displaying of the visual data may show image information and/or text information indicating a person's state simultaneously or separately; the display positions may include the windows, rear window, roof, and the area around the vehicle body.
  • the different display locations may display the same or different visual data.
  • after the visual data has been displayed for a set time, it may be erased at the display position; or the visual data may be kept displayed until the next visual data arrives.
  • before converting the facial expression of the person into visual data, the method further includes: receiving a voice instruction from a person, recognizing the voice instruction, and identifying the facial expression of the person it represents; it is then determined whether that facial expression is consistent with the recognized facial expression; if they are consistent, the facial expression is converted into visual data; if not, the facial data of the person continues to be collected.
  • the content of the voice instruction may be content related to the current facial expression input by the person, thereby verifying the correctness of the facial expression recognition by voice instructions.
  • the displaying the visual data comprises: displaying the visual data in a display position corresponding to the button operation according to the operation of the button by the user.
  • for example, button 1 corresponds to the rear window and button 2 corresponds to the left side of the vehicle body: when the user operates button 1, the terminal displays the visual data on the rear window, and when the user operates button 2, the terminal displays the visual data on the left side of the body.
  • in this way, the person's status is conveyed to the drivers or passengers of other vehicles.
  • collecting facial data of the person and the recognizing the facial expression of the person may be performed by a terminal with a camera, such as a mobile terminal of the person, or a facial expression capturing device.
  • FIG. 2 is a schematic diagram of an implementation process of collecting facial data of a person in the embodiment.
  • the method for collecting facial data of a person includes the following steps.
  • step 201 the terminal tracks the positions of the person's facial features.
  • step 202 the terminal compares the feature positions with the standard coordinates and determines whether the coordinates fall within the standard coordinate range. If yes, step 203 is performed, and if no, step 201 is performed.
  • the standard coordinate range used by the terminal is preset in the terminal, and the standard coordinates are obtained from large-scale statistics: when surveyed subjects show various expressions, the positions of their facial features change slightly (for example, the corners of the mouth rise slightly when a subject smiles), and aggregating these changes over many subjects yields the standard coordinates.
  • once the terminal is running and judging the feature coordinates, the person's face is determined to be captured whenever the coordinates fall within the standard coordinate range; if the coordinates fall outside that range, it is determined that the face has not been captured, and the terminal needs to track the person's feature positions again.
  • step 203 the terminal collects facial data of the person.
  • the facial data can be used to identify a facial expression of a person.
  • FIG. 3 is a schematic diagram of a process for realizing a facial expression of a person according to a person's face data in the embodiment.
  • a method for recognizing a facial expression of a person according to a person's facial data includes the following steps.
  • step 301 the facial features position coordinates are determined based on the collected facial data of the person.
  • step 302 the facial-feature position coordinates are compared with the feature position coordinates of the expression templates, and it is determined whether they fall within the threshold range of a template's feature coordinates. If yes, step 303 is performed, and if no, step 301 is performed.
  • the expression-template feature coordinates used in this judgment are preset in the terminal; once the terminal is running, whenever the feature coordinates fall within the threshold range of a template's feature coordinates, it is determined that the person has shown the corresponding facial expression in the template; if the coordinates fall outside the threshold range, it is determined that the person has not shown the corresponding expression, and the person's facial data needs to be re-collected to determine the feature positions.
  • step 303 the facial expression of the collected person is identified.
  • the facial expressions that can be recognized may include: a happy facial expression, a thankful facial expression, and an angry facial expression.
  • the embodiment further provides a device for displaying a person's status.
  • the device in this embodiment may include: an acquisition unit 41, a recognition unit 42, and a display unit 43.
  • the acquisition unit 41 is arranged to collect facial data of a person.
  • the recognition unit 42 is configured to recognize a facial expression of the person based on the facial data of the person and convert the facial expression of the person into visual data.
  • the display unit 43 is arranged to display the visual data.
  • the acquisition unit 41 and the recognition unit 42 of the device shown in FIG. 4 can be installed in an independent hardware device and/or run in a terminal with a camera.
  • the display unit 43 can be installed at a position including a window, a rear window, a roof, a vehicle body, and the like; the number of display positions is selected by the user, and the different display positions can display visual data of the same or different contents.
  • the display unit 43 may further include direction buttons for displaying the visual data at the display position corresponding to a button operation; for example, button 1 corresponds to the rear window and button 2 corresponds to the left side of the vehicle body.
  • the display unit 43 may further be configured to erase the visual data at the display position after the displayed visual data reaches the set time.
  • FIG. 5 is a schematic structural diagram of the recognition unit 42 in the embodiment. As shown in FIG. 5, the recognition unit 42 may include: a facial data recognition module 51, a facial data conversion output module 52, and a visual data transmission module 53.
  • the facial data recognition module 51 is configured to recognize facial data of a person collected by the acquisition unit and convert it into a facial expression of a person.
  • the facial data conversion output module 52 is arranged to convert the facial expressions of the person into visual data.
  • a visual data transmission module 53 is arranged to transmit the visual data to the display unit.
  • the apparatus in the embodiment shown in FIG. 4 may further include: a voice recognition unit and a determination unit.
  • the voice recognition unit is configured to receive a voice command, recognize the voice command, and identify the facial expression of the person represented by the voice command.
  • the determining unit is configured to determine whether the facial expression of the person represented by the voice instruction is consistent with the facial expression of the person recognized by the recognition unit, and if they are consistent, trigger the recognition unit to convert the facial expression of the person into visual data. If not, trigger the acquisition unit to continue collecting facial data of the person.
  • the voice command can come from a person, that is, the person completes the input.
  • the device for displaying a person's status in the embodiment may run periodically, and the length of the period may be preset in the device by a user, such as the person; alternatively, the device may be set to start collecting the person's facial data after the terminal receives a button command, so that the person manually controls when the device starts running.
  • the embodiment further provides a computer readable storage medium storing computer executable instructions for performing the above method.
  • FIG. 6 is a schematic diagram showing the hardware structure of an electronic device according to the embodiment. As shown in FIG. 6, the electronic device includes: one or more processors 610 and a memory 620. One processor 610 is taken as an example in FIG. 6.
  • the electronic device may further include: an input device 630 and an output device 640.
  • the processor 610, the memory 620, the input device 630, and the output device 640 in the electronic device may be connected by a bus or by other means; connection through a bus is taken as an example in FIG. 6.
  • the input device 630 can receive input numeric or character information
  • the output device 640 can include a display device such as a display screen.
  • the memory 620 is a computer readable storage medium that can be used to store software programs, computer executable programs, and modules.
  • the processor 610 performs various functional applications and data processing by executing software programs, instructions, and modules stored in the memory 620 to implement any of the above embodiments.
  • the memory 620 may include a program storage area and a data storage area; the program storage area may store an operating system and the application required by at least one function, and the data storage area may store data created according to the use of the electronic device.
  • the memory may include volatile memory such as random access memory (RAM), and may also include non-volatile memory such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device.
  • Memory 620 can be a non-transitory computer storage medium or a transitory computer storage medium.
  • the non-transitory computer storage medium may be, for example, at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
  • memory 620 can optionally include memory remotely located relative to processor 610, which can be connected to the electronic device over a network. Examples of the above networks may include the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • Input device 630 can be used to receive input digital or character information and to generate key signal inputs related to user settings and function controls of the electronic device.
  • the output device 640 can include a display device such as a display screen.
  • the electronic device of the present embodiment may further include a communication device 650 that transmits and/or receives information over a communication network.
  • a person skilled in the art can understand that all or part of the process of implementing the above embodiment method can be completed by executing related hardware by a computer program, and the program can be stored in a non-transitory computer readable storage medium.
  • the program when executed, may include the flow of an embodiment of the method as described above, wherein the non-transitory computer readable storage medium may be a magnetic disk, an optical disk, a read only memory (ROM), or a random access memory (RAM). Wait.
  • the present disclosure provides a method and apparatus for displaying a person's status, which can quickly and conveniently show the current person's state to surrounding drivers, without requiring the driver to memorize anything.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method and apparatus for displaying a person's status. The method includes: collecting a person's facial data; recognizing the person's facial expression according to the facial data; converting the person's facial expression into visual data; and displaying the visual data.

Description

Method and apparatus for displaying a person's status

Technical Field

The present disclosure relates to information display technologies, for example, to a method and apparatus for displaying a person's status.

Background

One scheme for displaying a person's status may include: installing a display screen connected by a data line to a control circuit placed within the operator's reach; the control circuit includes buttons, a speaker, and a single-chip microcomputer (MCU), and a button or a combination of buttons serves as the MCU's input signal. Statements corresponding to each button or button combination are preset in the MCU, and different display contents are selected through the buttons. A typical usage scene: when a person inconveniences someone else, pressing a button makes the display show an apology or a cute expression, and this simple operation can defuse everyday disputes.

However, the above scheme requires manual operation by the operator, which is cumbersome, and if applied to a driver it may create certain driving safety hazards. Moreover, presetting simple messages through buttons and button combinations is a relatively primitive implementation, leaves room for misoperation, and imposes a high learning and memorization cost on the user.
Summary

In view of this, the present disclosure provides a method and apparatus for displaying a person's status, which can quickly and conveniently show the current person's state to surrounding drivers, without requiring the driver to memorize anything.

This embodiment provides a method for displaying a person's status, including: collecting a person's facial data; recognizing the person's facial expression according to the facial data; converting the person's facial expression into visual data; and displaying the visual data.

Optionally, collecting the person's facial data includes: tracking the positions of the person's facial features; comparing those positions with standard coordinates; and collecting the person's facial data when the positions are determined to fall within the standard coordinate range.

Optionally, recognizing the person's facial expression according to the facial data includes: determining facial-feature position coordinates according to the collected facial data, and comparing them with the facial-feature position coordinates of expression templates; and recognizing the collected person's facial expression when the coordinates are determined to lie within the threshold range of a template's feature coordinates.

Optionally, displaying the visual data includes at least one of the following: displaying the visual data at one or more positions on the vehicle body; and displaying the visual data at the display position corresponding to a button operation.

Optionally, before converting the person's facial expression into visual data, the method further includes: recognizing a received voice instruction, identifying the facial expression of the person represented by the voice instruction, and determining whether that facial expression is consistent with the facial expression recognized according to the person's facial data; if they are consistent, the person's facial expression is converted into visual data; if not, collection of the person's facial data continues.

Optionally, collecting the person's facial data includes: collecting the person's facial data periodically; or starting the collection of the person's facial data after a start instruction is received.
This embodiment also provides an apparatus for displaying a person's status, including: an acquisition unit, a recognition unit, and a display unit. The acquisition unit is configured to collect a person's facial data; the recognition unit is configured to recognize the person's facial expression according to the facial data and convert the facial expression into visual data; and the display unit is configured to display the visual data.

Optionally, the recognition unit includes: a facial-data recognition module, a facial-data conversion and output module, and a visual-data transmission module.

The facial-data recognition module is configured to recognize the facial data collected by the acquisition unit and convert it into the person's facial expression; the facial-data conversion and output module is configured to convert the person's facial expression into visual data; and the visual-data transmission module is configured to transmit the visual data to the display unit.

Optionally, the apparatus further includes a voice recognition unit and a determination unit.

The voice recognition unit is configured to receive a voice instruction, recognize the voice instruction, and identify the facial expression of the person represented by it. The determination unit is configured to determine whether the facial expression represented by the voice instruction is consistent with the facial expression recognized by the recognition unit; if they are consistent, it triggers the recognition unit to convert the person's facial expression into visual data; if not, it triggers the acquisition unit to continue collecting the person's facial data.

Optionally, the display unit further includes buttons, configured to display the visual data at the display position corresponding to a button operation.

With the method and apparatus for displaying a person's status provided by this embodiment, other people can judge the person's current state according to the displayed visual data; users need not memorize numerous combination relationships, so the learning cost is low.

When this embodiment is applied to drivers and passengers, it can quickly and conveniently show the current person's state to surrounding drivers, avoid some types of traffic accidents, and improve driving courtesy. This embodiment can also be implemented with terminal devices already available in many types of vehicles, making it simple and convenient to implement and widely applicable.

This embodiment also provides a computer-readable storage medium storing computer-executable instructions for performing the above method.

This embodiment also provides an electronic device including one or more processors, a memory, and one or more programs stored in the memory that, when executed by the one or more processors, perform the above method.

This embodiment also provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions that, when executed by a computer, cause the computer to perform any of the above methods.
Brief Description of the Drawings

FIG. 1 is a schematic flowchart of the method for displaying a person's status in this embodiment.

FIG. 2 is a schematic flowchart of collecting a person's facial data in this embodiment.

FIG. 3 is a schematic flowchart of recognizing a person's facial expression according to facial data in this embodiment.

FIG. 4 is a schematic structural diagram of the apparatus for displaying a person's status in this embodiment.

FIG. 5 is a schematic structural diagram of the recognition unit in this embodiment.

FIG. 6 is a schematic diagram of the general hardware structure of the electronic device of this embodiment.

Detailed Description

To explain the features and technical content of the present disclosure in more detail, the implementation of this embodiment is described below with reference to the accompanying drawings. The drawings are provided for reference and illustration only and are not intended to limit the present invention. Where no conflict arises, the following embodiments and their technical features may be combined with one another.
FIG. 1 is a schematic flowchart of the method for displaying a person's status in this embodiment. As shown in FIG. 1, the method includes the following steps.

In step 101, the person's facial data is collected.

The facial data may include position data of the person's facial features.

Optionally, before step 101 begins, the method may further include: setting a time length for periodic execution. The time length may be input into the terminal by the user in advance (the terminal receives and stores the setting), or set by the terminal itself, for example to 5 minutes; the terminal then collects the person's facial data periodically at that interval. In addition, the terminal may receive a start instruction input by a person; after receiving it, the terminal starts working and collects the person's facial data, so that the operator can manually control when the terminal starts running.
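As an illustration of the two trigger modes just described, the following Python sketch models periodic collection with a configurable interval and a manually issued start instruction. The `capture_face_data` callback, the threading-based start signal, and the 300-second default are assumptions of this sketch, not details fixed by the embodiment.

```python
import threading
import time

def run_collection(capture_face_data, interval_seconds=300, start_event=None):
    """Run facial-data collection periodically, optionally gated on a start instruction.

    capture_face_data: assumed callback that performs one collection pass.
    interval_seconds:  period length; 300 s mirrors the 5-minute example above.
    start_event:       if given, collection waits until the event is set,
                       modeling the operator's manual start instruction.
    """
    if start_event is not None:
        start_event.wait()            # block until the start instruction arrives
    while True:
        capture_face_data()           # collect one round of facial data
        time.sleep(interval_seconds)  # wait out the configured period

# usage sketch: the operator starts the terminal manually
start = threading.Event()
threading.Thread(
    target=run_collection,
    kwargs={"capture_face_data": lambda: print("collecting facial data"),
            "interval_seconds": 300,
            "start_event": start},
    daemon=True,
).start()
start.set()  # the start instruction is received; collection begins
```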
In step 102, the person's facial expression is recognized according to the facial data.

Optionally, the person's facial data may refer to the position coordinates of the features of the person's face, and the person's facial expression may refer to the changes in expression that appear on the face when the person experiences emotions such as joy, happiness, or anger.

In step 103, the person's facial expression is converted into visual data.

Optionally, the visual data includes various kinds of image information that can indicate a person's state, such as a "smiley face" pattern expressing thanks or an "exclamation mark" pattern expressing a reminder; and/or various kinds of text information indicating a person's state, such as the text "Thank you" expressing thanks or the text "Please note" expressing a reminder.
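A minimal sketch of how step 103 could map recognized expressions to the image and text examples given above; the dictionary layout and the file names are illustrative assumptions, not part of the embodiment.

```python
# Assumed mapping from recognized expressions to visual data; the "smiley
# face" / "Thank you" and "exclamation mark" / "Please note" pairs follow
# the examples in the text, the file names are invented for illustration.
VISUAL_DATA = {
    "thankful": {"image": "smiley_face.png",      "text": "Thank you"},
    "happy":    {"image": "smiley_face.png",      "text": None},
    "angry":    {"image": "exclamation_mark.png", "text": "Please note"},
}

def to_visual_data(expression: str) -> dict:
    """Convert a recognized facial expression into displayable visual data."""
    try:
        return VISUAL_DATA[expression]
    except KeyError:
        # unknown expression: nothing to display, caller may keep collecting
        raise ValueError(f"no visual data configured for {expression!r}")
```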
In step 104, the visual data is displayed.

Optionally, displaying the visual data may include: displaying, simultaneously and/or separately at multiple display positions, image information indicating the person's state and/or various kinds of text information indicating the person's state.

This embodiment can be applied to drivers and passengers, quickly and conveniently showing the current occupants' state to surrounding drivers; it can also be applied to participants in a meeting, so that the host or speaker can keep abreast of the participants' state, such as whether they are interested in the content of the meeting.

In practice, the visual data may be displayed as image information and/or text information indicating the person's state, simultaneously or separately; the display positions may include the windows, rear window, roof, and the area around the vehicle body, and different positions may display the same or different visual data.

Optionally, after the visual data has been displayed for a set time, it may be erased from the display position; alternatively, the visual data may be kept displayed until the next piece of visual data arrives.
Optionally, before converting the person's facial expression into visual data, the method may further include: receiving a voice instruction from the person, recognizing the voice instruction, and identifying the facial expression of the person represented by it; then determining whether that facial expression is consistent with the recognized facial expression. If they are consistent, the person's facial expression is converted into visual data; if not, collection of the person's facial data continues. The content of the voice instruction may be content related to the person's current facial expression, so that the voice instruction verifies the correctness of the facial expression recognition.
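The voice-verification step can be read as one extra comparison inserted between recognition and conversion. Below is a sketch of the resulting loop; all five collaborators are passed in as assumed callbacks, since the embodiment does not fix their interfaces.

```python
def display_pipeline(collect, recognize, expression_from_voice, convert, show):
    """One verified pass: collect -> recognize -> voice cross-check -> convert -> show.

    All five arguments are assumed callbacks. expression_from_voice() is taken
    to return the expression named in the person's voice instruction.
    """
    while True:
        face_data = collect()
        expression = recognize(face_data)
        if expression == expression_from_voice():
            # the voice instruction confirms the recognition result
            show(convert(expression))
            return expression
        # inconsistent: keep collecting the person's facial data
```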
Optionally, displaying the visual data includes: displaying the visual data at the display position corresponding to a button operation, according to the user's operation of the buttons. For example, button 1 corresponds to the rear window and button 2 corresponds to the left side of the vehicle body: when the user operates button 1, the terminal displays the visual data on the rear window; when the user operates button 2, the terminal displays the visual data on the left side of the body, thereby conveying the person's status to the drivers or passengers of other vehicles.
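A sketch of the button routing described above, combined with the optional timed erasure; the position names, the `show`/`clear` display interface, and the 10-second timeout are assumptions of this sketch.

```python
import threading

# Assumed button-to-position table following the button 1 / button 2 example.
BUTTON_TO_POSITION = {1: "rear_window", 2: "left_body"}

def on_button_press(button_id, visual_data, displays, erase_after=10.0):
    """Show visual data on the display bound to the pressed button.

    displays:    mapping from position name to an object exposing show() and
                 clear() methods (an assumed interface).
    erase_after: models the optional set time after which the data is erased
                 from the display position; None keeps it until new data arrives.
    """
    position = BUTTON_TO_POSITION.get(button_id)
    if position is None:
        return                       # unmapped button: ignore the press
    display = displays[position]
    display.show(visual_data)
    if erase_after is not None:
        threading.Timer(erase_after, display.clear).start()
```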
In the above method, collecting the person's facial data and recognizing the person's facial expression may be performed by a terminal with a camera, such as the person's mobile terminal or a facial-expression capture device.

FIG. 2 is a schematic flowchart of collecting a person's facial data in this embodiment. As shown in FIG. 2, the method includes the following steps.

In step 201, the terminal tracks the positions of the person's facial features.

In step 202, the terminal compares the feature positions with the standard coordinates and determines whether the coordinates fall within the standard coordinate range. If yes, step 203 is performed; if no, step 201 is performed.

Optionally, the standard coordinate range used in this judgment is preset in the terminal, and the standard coordinates are obtained from large-scale statistics: when surveyed subjects show various expressions, the positions of their facial features change slightly (for example, the corners of the mouth rise slightly when a subject smiles), and aggregating these changes over many subjects yields the standard coordinates. Once the terminal is running and judging the feature coordinates, the person's face is determined to be captured whenever the coordinates fall within the standard coordinate range; if the coordinates fall outside that range, it is determined that the face has not been captured, and the terminal needs to track the person's feature positions again.
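A sketch of the step 202 judgment: each tracked feature must fall inside its preset standard range before facial data is collected. The per-feature bounding boxes below stand in for the statistically derived standard coordinates and are invented for illustration, as is the use of image-normalized coordinates.

```python
# Assumed standard coordinate ranges per facial feature, expressed as
# ((x_min, x_max), (y_min, y_max)) in image-normalized coordinates. In the
# embodiment these come from statistics over many subjects' expressions.
STANDARD_RANGES = {
    "left_eye":  ((0.25, 0.45), (0.30, 0.45)),
    "right_eye": ((0.55, 0.75), (0.30, 0.45)),
    "nose":      ((0.40, 0.60), (0.45, 0.65)),
    "mouth":     ((0.35, 0.65), (0.70, 0.85)),
}

def face_captured(feature_coords):
    """Return True when every tracked feature lies within its standard range.

    feature_coords: {"left_eye": (x, y), ...}, the tracked feature positions.
    """
    for name, ((x_min, x_max), (y_min, y_max)) in STANDARD_RANGES.items():
        x, y = feature_coords[name]
        if not (x_min <= x <= x_max and y_min <= y <= y_max):
            return False  # outside the range: face not captured, re-track (step 201)
    return True           # inside the range: collect facial data (step 203)
```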
In step 203, the terminal collects the person's facial data.

Optionally, the facial data can be used to recognize the person's facial expression.

FIG. 3 is a schematic flowchart of recognizing a person's facial expression according to facial data in this embodiment. As shown in FIG. 3, the method includes the following steps.

In step 301, the facial-feature position coordinates are determined according to the collected facial data.

In step 302, the feature position coordinates are compared with the feature position coordinates of the expression templates, and it is determined whether they fall within the threshold range of a template's feature coordinates. If yes, step 303 is performed; if no, step 301 is performed.

Optionally, the expression-template feature coordinates used in this judgment are preset in the terminal. Once the terminal is running, whenever the feature coordinates fall within the threshold range of a template's feature coordinates, it is determined that the person has shown the corresponding facial expression in the template; if the coordinates fall outside the threshold range, it is determined that the person has not shown the corresponding expression, and the person's facial data needs to be re-collected to determine the feature positions.
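A sketch of the step 302 comparison against expression templates. The embodiment only requires the feature coordinates to fall within a template's threshold; the per-point Euclidean criterion, the template values, and the 0.03 threshold below are assumptions of this sketch.

```python
import math

# Assumed expression templates: feature point -> expected (x, y) coordinate.
TEMPLATES = {
    "happy": {"mouth_left": (0.34, 0.68), "mouth_right": (0.66, 0.68)},
    "angry": {"mouth_left": (0.36, 0.74), "mouth_right": (0.64, 0.74)},
}
THRESHOLD = 0.03  # maximum allowed deviation per feature point

def match_expression(feature_coords):
    """Return the first template whose feature points all lie within the
    threshold of the measured coordinates, or None to re-collect (step 301)."""
    for name, template in TEMPLATES.items():
        if all(
            math.dist(feature_coords[point], expected) <= THRESHOLD
            for point, expected in template.items()
        ):
            return name  # the person shows this template's expression (step 303)
    return None          # outside every threshold: determine the positions again
```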
In step 303, the collected person's facial expression is recognized.

Optionally, the recognizable facial expressions may include: a happy facial expression, a thankful facial expression, and an angry facial expression.

To implement the method shown in FIG. 1, this embodiment also provides an apparatus for displaying a person's status. As shown in FIG. 4, the apparatus may include: an acquisition unit 41, a recognition unit 42, and a display unit 43.

The acquisition unit 41 is configured to collect the person's facial data.

The recognition unit 42 is configured to recognize the person's facial expression according to the facial data and convert the facial expression into visual data.

The display unit 43 is configured to display the visual data.

Optionally, the acquisition unit 41 and the recognition unit 42 of the apparatus shown in FIG. 4 may be installed in an independent hardware device and/or run in a terminal with a camera.

The display unit 43 may be installed at positions including the windows, rear window, roof, and around the vehicle body; the number of display positions is chosen by the user, and different display positions may display visual data with the same or different contents.

The display unit 43 may also include direction buttons for displaying the visual data at the display position corresponding to a button operation; for example, button 1 corresponds to the rear window and button 2 to the left side of the body, so that operating button 1 displays the visual data on the rear window and operating button 2 displays it on the left side of the body, conveying the person's status to the drivers or passengers of other vehicles.

The display unit 43 may further be configured to erase the visual data from the display position after the displayed visual data reaches the set time.
FIG. 5 is a schematic structural diagram of the recognition unit 42 in this embodiment. As shown in FIG. 5, the recognition unit 42 may include: a facial-data recognition module 51, a facial-data conversion and output module 52, and a visual-data transmission module 53.

The facial-data recognition module 51 is configured to recognize the person's facial data collected by the acquisition unit and convert it into the person's facial expression.

The facial-data conversion and output module 52 is configured to convert the person's facial expression into visual data.

The visual-data transmission module 53 is configured to transmit the visual data to the display unit.

In practice, the apparatus of this embodiment shown in FIG. 4 may further include: a voice recognition unit and a determination unit.

The voice recognition unit is configured to receive a voice instruction, recognize the voice instruction, and identify the facial expression of the person represented by it.

The determination unit is configured to determine whether the facial expression represented by the voice instruction is consistent with the facial expression recognized by the recognition unit; if they are consistent, it triggers the recognition unit to convert the person's facial expression into visual data; if not, it triggers the acquisition unit to continue collecting the person's facial data.

The voice instruction may come from a person, that is, the input is completed by the person.

In practice, the apparatus for displaying a person's status in this embodiment may run periodically, and the length of the period may be preset in the apparatus by a user, such as the person; alternatively, the apparatus may be set to start collecting the person's facial data after the terminal receives a button instruction, so that the person manually controls when the apparatus starts running.

The above are only preferred embodiments of the present invention and are not intended to limit its protection scope.
This embodiment also provides a computer-readable storage medium storing computer-executable instructions for performing the above method.

FIG. 6 is a schematic diagram of the hardware structure of an electronic device according to this embodiment. As shown in FIG. 6, the electronic device includes: one or more processors 610 and a memory 620. One processor 610 is taken as an example in FIG. 6.

The electronic device may further include: an input device 630 and an output device 640.

The processor 610, the memory 620, the input device 630, and the output device 640 in the electronic device may be connected by a bus or by other means; connection through a bus is taken as an example in FIG. 6.

The input device 630 can receive input numeric or character information, and the output device 640 can include a display device such as a display screen.

The memory 620, as a computer-readable storage medium, can be used to store software programs, computer-executable programs, and modules. The processor 610 executes the software programs, instructions, and modules stored in the memory 620 to perform various functional applications and data processing, thereby implementing any of the methods in the above embodiments.

The memory 620 may include a program storage area and a data storage area; the program storage area may store an operating system and the application required by at least one function, and the data storage area may store data created according to the use of the electronic device. In addition, the memory may include volatile memory such as random access memory (RAM), and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device.

The memory 620 may be a non-transitory computer storage medium or a transitory computer storage medium. The non-transitory computer storage medium may be, for example, at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 620 may optionally include memory remotely located relative to the processor 610; such remote memory may be connected to the electronic device through a network. Examples of such networks include the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.

The input device 630 can be used to receive input numeric or character information and to generate key-signal inputs related to user settings and function control of the electronic device. The output device 640 can include a display device such as a display screen.

The electronic device of this embodiment may further include a communication device 650, which transmits and/or receives information over a communication network.

Those of ordinary skill in the art will understand that all or part of the processes in the above method embodiments can be completed by a computer program instructing the relevant hardware; the program can be stored in a non-transitory computer-readable storage medium and, when executed, can include the processes of the embodiments of the above methods. The non-transitory computer-readable storage medium can be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
Industrial Applicability

The present disclosure provides a method and apparatus for displaying a person's status, which can quickly and conveniently show the current person's state to surrounding drivers, without requiring the driver to memorize anything.

Claims (11)

  1. A method for displaying a person's status, comprising:
    collecting a person's facial data;
    recognizing the person's facial expression according to the person's facial data;
    converting the person's facial expression into visual data; and
    displaying the visual data.
  2. The method according to claim 1, wherein collecting the person's facial data comprises:
    tracking the positions of the person's facial features;
    comparing the feature positions with preset standard coordinates; and
    collecting the person's facial data when the feature positions are determined to fall within the standard coordinate range.
  3. The method according to claim 1, wherein recognizing the person's facial expression according to the person's facial data comprises:
    determining facial-feature position coordinates according to the collected facial data, and comparing them with the facial-feature position coordinates of expression templates; and
    recognizing the collected person's facial expression when the feature position coordinates are determined to lie within the threshold range of a template's feature position coordinates.
  4. The method according to claim 1, wherein displaying the visual data comprises at least one of the following:
    displaying the visual data at one or more positions on the vehicle body; and
    displaying the visual data at the display position corresponding to a button operation.
  5. The method according to claim 1, further comprising, before converting the person's facial expression into visual data:
    recognizing a received voice instruction, identifying the facial expression of the person represented by the voice instruction, and determining whether that facial expression is consistent with the facial expression recognized according to the person's facial data; if they are consistent, converting the person's facial expression into visual data; if not, continuing to collect the person's facial data.
  6. The method according to any one of claims 1 to 5, wherein collecting the person's facial data comprises:
    collecting the person's facial data periodically; or
    starting the collection of the person's facial data after a start instruction is received.
  7. An apparatus for displaying a person's status, comprising an acquisition unit, a recognition unit, and a display unit, wherein the acquisition unit is configured to collect a person's facial data;
    the recognition unit is configured to recognize the person's facial expression according to the person's facial data and convert the facial expression into visual data; and
    the display unit is configured to display the visual data.
  8. The apparatus according to claim 7, wherein the recognition unit comprises a facial-data recognition module, a facial-data conversion and output module, and a visual-data transmission module, wherein
    the facial-data recognition module is configured to recognize the facial data collected by the acquisition unit and convert it into the person's facial expression;
    the facial-data conversion and output module is configured to convert the person's facial expression into visual data; and
    the visual-data transmission module is configured to transmit the visual data to the display unit.
  9. The apparatus according to claim 7, further comprising a voice recognition unit and a determination unit, wherein
    the voice recognition unit is configured to receive a voice instruction, recognize the voice instruction, and identify the facial expression of the person represented by it; and
    the determination unit is configured to determine whether the facial expression represented by the voice instruction is consistent with the facial expression recognized by the recognition unit, and, if they are consistent, to trigger the recognition unit to convert the person's facial expression into visual data; if not, to trigger the acquisition unit to continue collecting the person's facial data.
  10. The apparatus according to any one of claims 7 to 9, wherein the display unit further comprises buttons configured to display the visual data at the display position corresponding to a button operation.
  11. A computer-readable storage medium storing computer-executable instructions for performing the method of any one of claims 1 to 6.
PCT/CN2017/091319 2016-11-30 2017-06-30 Method and apparatus for displaying a person's status WO2018099076A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611092822.2 2016-11-30
CN201611092822.2A CN108133166B (zh) 2016-11-30 2016-11-30 Method and apparatus for displaying a person's status

Publications (1)

Publication Number Publication Date
WO2018099076A1 true WO2018099076A1 (zh) 2018-06-07

Family

ID=62241207

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/091319 WO2018099076A1 (zh) 2016-11-30 2017-06-30 Method and apparatus for displaying a person's status

Country Status (2)

Country Link
CN (1) CN108133166B (zh)
WO (1) WO2018099076A1 (zh)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101004791A (zh) * 2007-01-19 2007-07-25 赵力 Facial expression recognition method based on two-dimensional partial least squares
CN101452582A (zh) * 2008-12-18 2009-06-10 北京中星微电子有限公司 Method and device for implementing three-dimensional video special effects
CN105551499A (zh) * 2015-12-14 2016-05-04 渤海大学 Emotion visualization method for speech and facial expression signals

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102355527A (zh) * 2011-07-22 2012-02-15 深圳市无线开锋科技有限公司 Mobile phone mood-sensing device and method
JP2015067254A (ja) 2013-10-01 2015-04-13 パナソニックIpマネジメント株式会社 In-vehicle device and automobile equipped with it
CN104104867B (zh) 2014-04-28 2017-12-29 三星电子(中国)研发中心 Method and apparatus for controlling an imaging device to perform image capture
US9576175B2 (en) * 2014-05-16 2017-02-21 Verizon Patent And Licensing Inc. Generating emoticons based on an image of a face
CN105354527A (zh) 2014-08-20 2016-02-24 南京普爱射线影像设备有限公司 Negative facial expression recognition and encouragement system


Also Published As

Publication number Publication date
CN108133166A (zh) 2018-06-08
CN108133166B (zh) 2023-03-14

Similar Documents

Publication Publication Date Title
WO2021159630A1 (zh) Vehicle commuting control method and apparatus, electronic device, medium, and vehicle
CN111143925B (zh) Drawing annotation method and related products
US9330509B2 (en) Method for obtaining product feedback from drivers in a non-distracting manner
EP3648076B1 (en) Method and apparatus for interacting traffic information, and computer storage medium
WO2003044648A2 (en) Method and apparatus for a gesture-based user interface
TWI597684B (zh) Vehicle management and control device and vehicle management and control method
WO2022160678A1 (zh) Action detection method and apparatus for rail transit drivers, device, medium, and tool
EP3098692A1 (en) Gesture device, operation method for same, and vehicle comprising same
US9983407B2 (en) Managing points of interest
CN202904252U (zh) Device for controlling automotive electrical appliances using gesture recognition technology
CN114489331A (zh) Mid-air gesture interaction method, apparatus, device, and medium distinct from button clicks
WO2018099076A1 (zh) Method and apparatus for displaying a person's status
CN111354216A (zh) Vehicle parking position recognition method and apparatus, and related device
CN103885580A (zh) Vehicle control system and method using gestures
CN111985417A (zh) Functional component recognition method, apparatus, device, and storage medium
US20140098998A1 (en) Method and system for controlling operation of a vehicle in response to an image
CN108734262B (zh) Smart device control method and apparatus, smart device, and medium
CN113721582B (zh) Cockpit system response efficiency testing method, device, storage medium, and apparatus
CN112579035A (zh) Voice acquisition terminal input system and input method
CN113392263A (zh) Data annotation method and apparatus, electronic device, and storage medium
CN110580430A (zh) Identity entry method, apparatus, and system
CN112446695A (zh) Data processing method and apparatus
CN112951216B (zh) In-vehicle voice processing method and in-vehicle infotainment system
CN116185190B (zh) Information display control method and apparatus, and electronic device
CN111368590A (zh) Emotion recognition method and apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17877350

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17877350

Country of ref document: EP

Kind code of ref document: A1