WO2019184745A1 - Control method, control system and computer-readable storage medium for a smart picture frame - Google Patents

Control method, control system and computer-readable storage medium for a smart picture frame

Info

Publication number
WO2019184745A1
Authority
WO
WIPO (PCT)
Prior art keywords
viewer
emotional state
information
identity information
control
Prior art date
Application number
PCT/CN2019/078495
Other languages
English (en)
French (fr)
Inventor
李文波
同关山
Original Assignee
京东方科技集团股份有限公司
Priority date
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司
Priority to US 16/605,941 (published as US11455036B2)
Publication of WO2019184745A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/015 Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B 5/165 Evaluating the state of mind, e.g. depression, anxiety
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B 5/316 Modalities, i.e. specific diagnostic methods
    • A61B 5/369 Electroencephalography [EEG]
    • A61B 5/372 Analysis of electroencephalograms
    • A61B 5/374 Detecting the frequency distribution of signals, e.g. detecting delta, theta, alpha, beta or gamma waves
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B 5/316 Modalities, i.e. specific diagnostic methods
    • A61B 5/369 Electroencephalography [EEG]
    • A61B 5/377 Electroencephalography [EEG] using evoked responses
    • A61B 5/378 Visual stimuli
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L 25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L 25/63 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B 5/25 Bioelectric electrodes therefor
    • A61B 5/279 Bioelectric electrodes therefor specially adapted for particular uses
    • A61B 5/291 Bioelectric electrodes therefor specially adapted for particular uses for electroencephalography [EEG]
    • A61B 5/293 Invasive
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01 Indexing scheme relating to G06F3/01
    • G06F 2203/011 Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns

Definitions

  • the present disclosure relates to the field of display technologies, and in particular, to a control method, a control system, and a computer readable storage medium for a smart picture frame.
  • iGallery is an emerging home cloud art gallery that includes a curated art content library, a cloud platform for art appreciation and trading, display terminals that faithfully reproduce original artwork, and more. Although the iGallery product brings high artistic quality and high-tech surprises into the home, it lacks certain humanized (user-friendly) settings.
  • The embodiments of the present disclosure provide a control method, a control system, and a computer-readable storage medium for a smart picture frame, which make the display of the smart picture frame more humanized.
  • A method of controlling a smart picture frame may include: determining identity information of a viewer located in front of the smart picture frame; invoking, according to a preset target emotional state and the identity information, a pre-stored control instruction corresponding to the identity information in the target emotional state; and controlling the smart picture frame to perform corresponding display adjustment according to the invoked control instruction.
  • The method may further include: determining the current emotional state of the viewer with the same identity information while viewing the adjusted smart picture frame; and determining whether the current emotional state reaches the target emotional state. If not, the control instruction corresponding to the identity information is changed according to a plurality of preset candidate control instructions corresponding to the target emotional state, the smart picture frame is re-controlled to perform display adjustment according to the changed control instruction, and the current emotional state of the viewer with the same identity information is determined again. If so, the current control instruction is stored as the control instruction corresponding to the identity information.
  • Determining the current emotional state of the viewer with the same identity information while viewing the adjusted smart picture frame may include: acquiring brainwave information of the viewer; determining the brainwave frequency of the viewer according to the brainwave information; and determining the current emotional state of the viewer according to a predetermined correspondence between brainwave frequency bands and emotional states.
  • Determining the current emotional state of the viewer with the same identity information while viewing the adjusted smart picture frame may further include: acquiring external form information of the viewer; and correcting the determined current emotional state of the viewer according to the external form information.
  • Acquiring the external form information of the viewer may include: acquiring facial expression information of the viewer; and/or acquiring sound information of the viewer; and/or acquiring body motion information of the viewer.
  • Determining the identity information of the viewer located in front of the smart picture frame may include: acquiring feature information of the viewer located in front of the smart picture frame; and determining whether any stored feature information matches the acquired feature information. If so, the identity information corresponding to the matching feature information is taken as the identity information of the viewer; if not, new identity information is assigned to the feature information as the identity information of the viewer.
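  • The matching logic of this step can be sketched as follows. This is a minimal illustration only: the feature representation, the cosine-similarity metric, the threshold value, and all names are assumptions, not part of the disclosure.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def determine_identity(feature, stored, threshold=0.9):
    """Match a viewer's feature vector against stored profiles.

    `stored` maps identity -> feature vector. If a stored profile is
    similar enough, the viewer is recognized; otherwise a new identity
    is assigned and the profile registered (the first-time-viewer case).
    """
    for identity, ref in stored.items():
        if cosine(feature, ref) >= threshold:
            return identity              # known viewer
    new_id = "viewer_%d" % len(stored)   # first-time viewer
    stored[new_id] = list(feature)       # register the new profile
    return new_id
```

A returning viewer whose feature vector is close to a registered one receives the same identity, so the control instruction learned for that identity can be reused.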
  • invoking a pre-stored control instruction corresponding to the identity information in the target emotional state may include invoking a pre-stored initial control instruction according to the preset target emotional state and the identity information.
  • the initial control instruction is one of a plurality of candidate control instructions corresponding to the preset target emotional state.
  • A control system for a smart picture frame may include: an identity determiner configured to determine identity information of a viewer located in front of the smart picture frame; an instruction invoker configured to invoke, according to a preset target emotional state and the identity information, a pre-stored control instruction corresponding to the identity information in the target emotional state; and a display controller configured to control the smart picture frame to perform corresponding display adjustment according to the invoked control instruction.
  • The system may further include: an emotion validator configured to determine the current emotional state of a viewer with the same identity information while viewing the adjusted smart picture frame; a data processor configured to determine whether the current emotional state reaches the target emotional state; a memory configured to store the current control instruction as the control instruction corresponding to the identity information when the current emotional state reaches the target emotional state; and an instruction changer configured to change the control instruction corresponding to the identity information, according to a plurality of preset candidate control instructions corresponding to the target emotional state, when the current emotional state does not reach the target emotional state. The display controller is further configured to re-control the smart picture frame to perform corresponding display adjustment according to the changed control instruction.
  • the emotion validator may be configured to determine a current emotional state of a viewer of the same identity information at fixed time intervals.
  • The emotion validator may include: an EEG signal collector configured to acquire brainwave information of the viewer; a signal processor configured to determine the brainwave frequency of the viewer according to the brainwave information; and a data matcher configured to determine the current emotional state of the viewer according to a predetermined correspondence between brainwave frequency bands and emotional states.
  • The emotion validator may further include: an external form collector configured to acquire external form information of the viewer; and a data modifier configured to correct the determined current emotional state of the viewer according to the external form information.
  • The external form collector may include: an image capturer configured to acquire facial expression information of the viewer and/or body motion information of the viewer; and/or an acoustic wave sensor configured to acquire sound information of the viewer.
  • a computer readable storage medium stores computer software instructions that, when executed by a processor, cause the processor to perform the methods described above.
  • FIG. 1 is a flowchart of a method for controlling a smart picture frame according to an embodiment of the present disclosure
  • FIG. 2 is an optional subsequent step of the control method of the smart picture frame shown in FIG. 1;
  • FIG. 3 is an embodiment of step S201 shown in FIG. 2;
  • FIG. 4 is an optional subsequent step of the embodiment of step S201 shown in FIG. 3;
  • FIG. 5 is another flowchart of a method for controlling a smart picture frame according to an embodiment of the present disclosure
  • FIG. 6 is a schematic structural diagram of a control system of a smart picture frame according to an embodiment of the present disclosure
  • FIG. 7 is another schematic structural diagram of a control system of a smart picture frame according to an embodiment of the present disclosure.
  • FIG. 8 is an embodiment of an emotion validator of the control system shown in FIG. 7.
  • FIG. 1 is a flowchart of a method for controlling a smart picture frame according to an embodiment of the present disclosure. As shown in FIG. 1, the method may include:
  • S101 Determine identity information of a viewer located in front of the smart picture frame.
  • In the control method of the smart picture frame provided by the embodiments of the present disclosure, the identity information of different viewers is identified, and a pre-stored control instruction corresponding to the identity information in the target emotional state is invoked to control the smart picture frame to perform corresponding display adjustment.
  • In this way, the display information of the smart picture frame is used to accurately adjust the emotional state of the viewer toward the target emotional state, making the smart picture frame more humanized.
  • the target emotional state may be specifically set according to an application scenario of the smart picture frame.
  • For example, in a quieter location such as an office, the target emotional state can be set to calm and soothing, so that by viewing the display information of the smart picture frame, the viewer's emotional state approaches calm, which is conducive to work.
  • At a memorial service, the target emotional state can be set to sadness, so that the display information of the smart picture frame brings the viewer's emotional state closer to sadness, in keeping with the occasion.
  • step S101 needs to be performed to determine the identity information of the current viewer located in front of the smart picture frame.
  • The current viewer may be a first-time user of the smart picture frame, or a viewer who has used it before. For a first-time viewer, corresponding identification information needs to be configured for subsequent recognition. For a returning viewer, the identification information corresponding to the current viewer needs to be found among the stored identification information.
  • Step S101, determining the identity information of the current viewer in front of the smart picture frame, may include: acquiring feature information of the viewer; and determining whether any stored feature information matches it. If so, the corresponding identity information is taken as the identity information of the viewer; if not, the feature information is assigned new identification information as the identity information of the viewer.
  • A plurality of candidate control instructions corresponding to the target emotional state are stored in advance, and each candidate control instruction is used to control the smart picture frame to perform a corresponding display adjustment.
  • the display adjustments controlled by different alternative control commands are different.
  • an alternate control command may be preset in the plurality of alternate control commands as the initial control command.
  • the initial control command corresponds to the target emotional state.
  • The candidate control instructions may include: a control instruction for changing the background screen information to green, a control instruction for changing the brightness of the screen, a control instruction for displaying home screen information, and control instructions for displaying pictures of beautiful scenery, such as vast grasslands, the boundless sea, towering forests, and gentle streams.
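  • The relationship between a target emotional state, its candidate control instructions, and the preset initial instruction can be sketched as a simple table. The instruction names and the emotion label below are illustrative assumptions only; the convention that the initial instruction is the first candidate is likewise an assumption.

```python
# Hypothetical candidate control instructions per target emotional state.
CANDIDATES = {
    "calm": [
        "set_background_green",     # change background to green
        "reduce_brightness",        # change screen brightness
        "show_landscape_grassland", # scenery pictures
        "show_landscape_sea",
        "show_landscape_forest",
        "show_landscape_stream",
    ],
}

def initial_instruction(target_emotion):
    """Return the preset initial control instruction for a target
    emotional state: by the convention assumed here, the first candidate."""
    return CANDIDATES[target_emotion][0]
```

For a first-time viewer, whose identity has no stored instruction yet, the system would fall back to this initial instruction.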
  • For a viewer using the smart picture frame for the first time, the identity information is newly assigned identification information, and no candidate control instruction corresponding to this newly assigned identification information has been stored in the memory in advance.
  • In this case, step S102, invoking a pre-stored control instruction corresponding to the identity information in the target emotional state according to the set target emotional state and the identity information, may include: invoking the pre-stored initial control instruction in the target emotional state as the control instruction corresponding to the identity information. For a viewer who has used the smart picture frame before, since candidate control instructions corresponding to the identity information are pre-stored in the memory, the pre-stored candidate control instruction corresponding to the identity information in the target emotional state can be invoked directly. In this case, the invoked control instruction may be either the initial control instruction or another candidate control instruction corresponding to the target emotional state.
  • The emotional state of a viewer with the same identity information may change as the displayed screen information changes, for example, from a depressed mood to a relaxed mood, from an angry mood to a sad mood, or from a relaxed mood to a sad mood.
  • the control instructions corresponding to the identity information of the same viewer in the target emotional state may be continuously corrected.
  • The control method of the smart picture frame provided according to the embodiments of the present disclosure may therefore further include the optional subsequent steps shown in FIG. 2.
  • FIG. 2 is an optional subsequent step of the control method of the smart picture frame shown in FIG. 1.
  • FIG. 2 is merely an example of correcting the control instruction through self-learning. After step S103 shown in FIG. 1, the following steps shown in FIG. 2 may further be included:
  • Step S202: determining whether the current emotional state reaches the target emotional state; if not, proceed to step S203; if so, proceed to step S205.
  • Step S204: re-controlling the smart picture frame to perform corresponding display adjustment according to the changed control instruction; then return to step S201.
  • After step S205, the control method may end.
  • In this way, the control instruction corresponding to the identity information in the target emotional state can be continuously corrected, so that the emotional state of the viewer corresponding to the identity information can be effectively adjusted to reach the target emotional state.
  • Step S202, determining whether the current emotional state reaches the target emotional state, may include: determining whether the current emotional state is identical to the target emotional state, or falls within the range indicated by the target emotional state.
  • For example, suppose the target emotional state is a relaxed mood. In this case, if the detected current emotional state is relaxed or pleasant, the current emotional state has reached the target emotional state, so the display information of the smart picture frame need not be adjusted; that is, the current display information of the smart picture frame is maintained.
  • Step S203 may include: randomly or sequentially selecting, from the plurality of candidate control instructions, a candidate control instruction different from the current control instruction corresponding to the identity information, as the changed control instruction corresponding to the identity information.
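  • A minimal sketch of this selection step, under assumed names (neither the function name nor the strategy labels appear in the disclosure): the sequential strategy takes the next candidate in the list, wrapping around, while the random strategy picks any candidate other than the current one.

```python
import random

def change_instruction(candidates, current, strategy="sequential"):
    """Pick a candidate control instruction different from `current`.

    "sequential": the next candidate in the list (wrapping around).
    "random": a uniformly random candidate excluding `current`.
    """
    if strategy == "sequential":
        nxt = (candidates.index(current) + 1) % len(candidates)
        return candidates[nxt]
    return random.choice([c for c in candidates if c != current])
```

Either strategy guarantees the changed instruction differs from the one that failed to reach the target emotional state.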
  • Step S201, determining the current emotional state of the viewer with the same identity information while viewing the adjusted smart picture frame, may be performed a fixed time period, for example 5 minutes, after step S103 controls the smart picture frame to perform corresponding display adjustment, or after step S204 re-controls the smart picture frame to perform corresponding display adjustment, so as to leave the viewer sufficient time for the emotional state to change.
  • FIG. 3 is an embodiment of step S201 shown in FIG. 2.
  • The foregoing step S201, determining the current emotional state of the viewer with the same identity information while viewing the adjusted smart picture frame, may include:
  • The viewer's brainwave information changes as the viewer's mood, i.e., inner activity, changes. Various waveform information of the brain waves can be acquired directly in the above step S301, and from this waveform information the brainwave frequency of the viewer can be obtained.
  • As shown in Table 1, different brainwave frequency bands correspond to different emotional states of the viewer. Therefore, the correspondence between brainwave frequency bands and emotional states can be set in advance, and the current emotional state of the viewer is then determined according to the determined brainwave frequency of the viewer.
  • For example, if the viewer's brainwave frequency falls in the Beta (β) band, it can be judged that the viewer's mood is relatively tense.
  • If the viewer's brainwave frequency falls in the Alpha (α) band, it can be judged that the viewer is in a relatively relaxed state.
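  • The band-to-state lookup can be sketched as follows. The numeric boundaries below are the conventional EEG band divisions (delta, theta, alpha, beta); the disclosure's own Table 1 may divide the bands differently, and the state labels are illustrative assumptions.

```python
# Conventional EEG frequency bands in Hz and assumed state labels.
BANDS = [
    (0.5, 4.0, "delta", "deep sleep"),
    (4.0, 8.0, "theta", "drowsy"),
    (8.0, 13.0, "alpha", "relaxed"),
    (13.0, 30.0, "beta", "tense/alert"),
]

def emotional_state(freq_hz):
    """Map a dominant brainwave frequency to (band, state label)."""
    for lo, hi, band, state in BANDS:
        if lo <= freq_hz < hi:
            return band, state
    return None, "unknown"  # outside the tabulated bands
```

A data matcher as described above would apply such a lookup to the frequency determined by the signal processor.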
  • On the basis of the steps shown in FIG. 3, the above step S201 may further include the steps shown in FIG. 4, which correct the determined current emotional state so that it is more accurate.
  • FIG. 4 is an optional subsequent step of the embodiment of step S201 shown in FIG. 3.
  • the current emotional state of the viewer who determines the same identity information in step S201 while viewing the adjusted smart frame may further include:
  • The external form information of the viewer acquired in the above step S401 can directly reflect the emotional state of the viewer.
  • The brainwave information can thus be supplemented by the external form information, and similar emotions can be distinguished accurately, so that the finally determined current emotional state of the viewer is more accurate. For example, if it is determined from the brainwave information that the current emotional state of the viewer is relaxed, and the acquired external form information shows that the viewer is smiling, the current emotional state of the viewer can be further refined into a pleasant state.
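  • The correction step performed by the data modifier can be sketched as a small refinement table. The table entries below (including the smiling-refines-relaxed-to-pleasant case from the text) and all names are illustrative assumptions; an EEG-derived state is returned unchanged when no external cue refines it.

```python
def refine_state(eeg_state, facial_expression=None):
    """Refine an EEG-derived emotional state with facial-expression
    information; fall back to the EEG-derived state when no rule applies."""
    refinements = {
        ("relaxed", "smiling"): "pleasant",       # example from the text
        ("tense/alert", "frowning"): "angry",     # assumed entries
        ("tense/alert", "crying"): "sad",
    }
    return refinements.get((eeg_state, facial_expression), eeg_state)
```

Sound volume and body-motion cues could be folded in the same way, as additional keys of the refinement table.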
  • The external form information reflecting the emotional state of the viewer may be of various kinds, such as a person's facial expression, body language, speaking volume, and pitch. Based on this, step S401, acquiring the external form information of the viewer, may include:
  • acquiring the facial expression information of the viewer, specifically through an image acquisition device such as a camera, for example, information that the viewer's face is in a crying state; and/or,
  • acquiring the sound information of the viewer, specifically information such as the volume and pitch of the viewer's voice, through a device such as a microphone or an acoustic wave sensor, for example, that the viewer's voice is loud and high-pitched; and/or,
  • acquiring the body motion information of the viewer, specifically the viewer's body language, through an infrared sensor or a camera, for example, information that the viewer's limbs are in rapid motion.
  • FIG. 5 is another flowchart of a method for controlling a smart picture frame according to an embodiment of the present disclosure. As shown in FIG. 5, the control method may include the following steps:
  • S501 Determine identity information of a viewer located in front of the smart picture frame.
  • After a set interval, for example 5 minutes, the subsequent steps are performed.
  • S506. Determine, according to brain wave information, a viewer's brain wave frequency
  • S507. Determine a current emotional state of the viewer according to a preset correspondence between the brain wave frequency band and the emotional state.
  • Step S508: determining whether the current emotional state reaches the target emotional state; if not, proceed to step S509; if so, proceed to step S511.
  • Step S510: re-controlling the smart picture frame to perform corresponding display adjustment according to the changed control instruction; then return to step S504.
  • After step S511, the control method may end.
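  • The self-learning flow described above can be sketched as a single loop. This is a schematic under assumed names: `apply_display` and `read_emotion` stand in for the display controller and emotion validator, the sequential candidate-cycling strategy and the `max_rounds` bound are assumptions, and the step numbers in the comments refer to FIG. 5.

```python
def control_loop(target, candidates, stored_instructions, identity,
                 apply_display, read_emotion, max_rounds=10):
    """Apply the instruction stored for this identity (or the initial
    candidate), read back the viewer's emotional state, and cycle through
    the alternatives until the target state is reached."""
    instr = stored_instructions.get(identity, candidates[0])
    for _ in range(max_rounds):
        apply_display(instr)                       # S504: adjust display
        if read_emotion() == target:               # S505-S508: check state
            stored_instructions[identity] = instr  # S511: remember it
            return instr
        nxt = (candidates.index(instr) + 1) % len(candidates)
        instr = candidates[nxt]                    # S509: try an alternative
    return None  # no candidate reached the target within the bound
```

Once an instruction succeeds, it is stored against the identity, so a returning viewer starts from the instruction that worked last time.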
  • An embodiment of the present disclosure further provides a control system for a smart picture frame. Since the principle by which the system solves the problem is similar to that of the foregoing control method, the implementation of the system may refer to the implementation of the control method, and repeated descriptions are omitted.
  • FIG. 6 is a schematic structural diagram of a control system of a smart picture frame according to an embodiment of the present disclosure. As shown in FIG. 6, the control system may include:
  • An identity determiner 601 configured to determine identity information of a viewer located in front of the smart picture frame
  • the command invoker 602 is configured to invoke a pre-stored control instruction corresponding to the identity information in the target emotional state according to the preset target emotional state and identity information;
  • the display controller 603 is configured to control the smart picture frame to perform corresponding display adjustment according to the invoked control command.
  • the identity determiner 601, the instruction invoker 602, and the display controller 603 can be integrated into a smart frame simultaneously.
  • the instruction invoker 602 and the display controller 603 can be integrated into the smart picture frame at the same time, and the identity determiner 601 is separately disposed in the device outside the smart picture frame.
  • Alternatively, only the display controller 603 may be integrated into the smart picture frame, with the identity determiner 601 and the instruction invoker 602 disposed in a device external to the smart picture frame. The present disclosure does not limit this.
  • FIG. 7 is another schematic structural diagram of a control system of a smart picture frame according to an embodiment of the present disclosure.
  • the control system as shown in FIG. 7 may further include:
  • An emotion validator 604 configured to determine a current emotional state of a viewer of the same identity information while viewing the adjusted smart frame
  • a data processor 605 configured to determine whether a current emotional state reaches a target emotional state
  • the memory 606 is configured to store the current control instruction as a control instruction corresponding to the identity information when determining that the current emotional state reaches the target emotional state;
  • the command changer 607 is configured to, when determining that the current emotional state has not reached the target emotional state, change the control instruction corresponding to the identity information according to the plurality of candidate control commands corresponding to the preset target emotional state.
  • The display controller 603, which is configured to control the smart picture frame to perform corresponding display adjustment according to the invoked control instruction, may be further configured to re-control the smart picture frame to perform corresponding display adjustment according to the changed control instruction.
  • the emotion validator 604 can be configured to determine the current emotional state of the viewer of the same identity information at fixed time intervals (eg, every 5 minutes).
  • The emotion validator 604, the data processor 605, the instruction changer 607, the memory 606, and the display controller 603 form a loop that performs the related operations described above, until the data processor 605 determines that the current emotional state of the viewer with the same identity information reaches the target emotional state.
  • FIG. 8 is an embodiment of the emotion validator 604 of the control system shown in FIG. 7. As shown in FIG. 8, the emotion validator 604 may include:
  • the EEG signal collector 614 is configured to acquire brainwave information of the viewer
  • the signal processor 624 is configured to determine a brainwave frequency of the viewer according to the brain wave information
  • the data matcher 634 is configured to determine a current emotional state of the viewer according to a correspondence between the preset brain wave band division and the emotional state.
  • The EEG signal collector 614 may include external electrodes or internal electrodes embedded in the cerebral cortex. The present disclosure is not limited in this respect.
  • the emotion validator 604 may further include:
  • the external shape collector 644 is configured to acquire external shape information of the viewer
  • the data modifier 654 is configured to correct the determined current emotional state of the viewer based on the external form information.
  • the external form collector 644 may include:
  • the image capturer 644a is configured to acquire facial modality information of the viewer; and/or acquire body motion information of the viewer;
  • the acoustic wave sensor 644b is configured to acquire sound information of the viewer.
  • the image capturer 644a can be a camera.
  • Embodiments of the present disclosure also provide a computer-readable storage medium storing computer software instructions that, when executed by a processor, cause the processor to perform the control method of a smart picture frame as described above.
  • The embodiments of the present disclosure may be implemented by hardware, or by software plus a necessary general hardware platform. Based on this understanding, the technical solution of the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) and includes a number of instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform the methods of the various embodiments of the present disclosure.
  • the modules in the apparatus of an embodiment may be distributed in the apparatus of the embodiment according to the description of the embodiment, or may be correspondingly relocated in one or more apparatuses different from that of the embodiment.
  • the modules of the above embodiments may be combined into one module, or may be further split into multiple sub-modules.
  • the control method, control system, and computer readable medium for the smart picture frame provided by the embodiments of the present disclosure, by identifying identity information of different viewers and invoking a pre-stored control instruction corresponding to the identity information in the target emotional state to control the smart picture frame to perform a corresponding display adjustment, achieve the effect of accurately adjusting the viewer's emotional state to the target emotional state by means of the display information of the smart picture frame, making the smart picture frame more humanized.


Abstract

A control method, a control system, and a computer readable storage medium for a smart picture frame. The method includes: determining identity information of a viewer located in front of the smart picture frame (S101); invoking, according to a preset target emotional state and the identity information, a pre-stored control instruction corresponding to the identity information in the target emotional state (S102); and controlling, according to the invoked control instruction, the smart picture frame to perform a corresponding display adjustment (S103).

Description

Control Method and Control System for a Smart Picture Frame, and Computer Readable Storage Medium
Technical Field
The present disclosure relates to the field of display technology, and in particular to a control method, a control system, and a computer readable storage medium for a smart picture frame.
Background
With rapid economic development, more and more families pay attention to art and culture consumption and pursue aesthetic expression and realization, while paintings and image works are currently the only field that has not yet been digitized. The smart picture frame (iGallery) is an emerging home cloud art gallery that comprises a curated art content library, a cloud platform for art appreciation and trading, a display terminal capable of faithfully reproducing original artworks, and additional services. Although an iGallery product placed in the home can show high artistic quality and the surprises brought by high technology, it still lacks some humanized settings.
Summary
In view of this, embodiments of the present disclosure provide a control method, a control system, and a computer readable storage medium for a smart picture frame, so as to make the display of the smart picture frame more humanized.
According to one aspect of the present disclosure, a control method for a smart picture frame is provided. The method may include: determining identity information of a viewer located in front of the smart picture frame; invoking, according to a preset target emotional state and the identity information, a pre-stored control instruction corresponding to the identity information in the target emotional state; and controlling, according to the invoked control instruction, the smart picture frame to perform a corresponding display adjustment.
In one embodiment, the method may further include: determining a current emotional state of a viewer with the same identity information when viewing the adjusted smart picture frame; determining whether the current emotional state reaches the target emotional state; if not, changing the control instruction corresponding to the identity information according to a plurality of preset candidate control instructions corresponding to the target emotional state, and re-controlling the smart picture frame to perform a corresponding display adjustment according to the changed control instruction, so as to determine the current emotional state of the viewer with the same identity information again; if so, storing the current control instruction as the control instruction corresponding to the identity information.
In one embodiment, determining the current emotional state of the viewer with the same identity information when viewing the adjusted smart picture frame may include: acquiring brainwave information of the viewer; determining a brainwave frequency of the viewer according to the brainwave information; and determining the current emotional state of the viewer according to a preset correspondence between brainwave frequency bands and emotional states.
In one embodiment, determining the current emotional state of the viewer with the same identity information when viewing the adjusted smart picture frame may further include: acquiring external form information of the viewer; and correcting the determined current emotional state of the viewer according to the external form information.
In one embodiment, acquiring the external form information of the viewer includes: acquiring facial expression information of the viewer; and/or acquiring voice information of the viewer; and/or acquiring body motion information of the viewer.
In one embodiment, determining the identity information of the viewer located in front of the smart picture frame may include: acquiring feature information of the viewer located in front of the smart picture frame; determining whether there is feature information matching the feature information of the viewer among stored pieces of feature information; if so, determining identification information corresponding to the matching feature information as the identity information of the viewer; if not, assigning new identification information to the feature information as the identity information of the viewer.
In one embodiment, invoking, according to the preset target emotional state and the identity information, the pre-stored control instruction corresponding to the identity information in the target emotional state may include invoking a pre-stored initial control instruction.
In one embodiment, the initial control instruction is one of a plurality of candidate control instructions corresponding to the preset target emotional state.
According to another aspect of the present disclosure, a control system for a smart picture frame is provided. The system may include: an identity determiner configured to determine identity information of a viewer located in front of the smart picture frame; an instruction invoker configured to invoke, according to a preset target emotional state and the identity information, a pre-stored control instruction corresponding to the identity information in the target emotional state; and a display controller configured to control, according to the invoked control instruction, the smart picture frame to perform a corresponding display adjustment.
In one embodiment, the system may further include: an emotion validator configured to determine a current emotional state of a viewer with the same identity information when viewing the adjusted smart picture frame; a data processor configured to determine whether the current emotional state reaches the target emotional state; a memory configured to store, when it is determined that the current emotional state reaches the target emotional state, the current control instruction as the control instruction corresponding to the identity information; and an instruction changer configured to change, when it is determined that the current emotional state does not reach the target emotional state, the control instruction corresponding to the identity information according to a plurality of preset candidate control instructions corresponding to the target emotional state; wherein the display controller is further configured to re-control, according to the changed control instruction, the smart picture frame to perform a corresponding display adjustment.
In one embodiment, the emotion validator may be configured to determine the current emotional state of the viewer with the same identity information at fixed time intervals.
In one embodiment, the emotion validator may include: an EEG signal collector configured to acquire brainwave information of the viewer; a signal processor configured to determine a brainwave frequency of the viewer according to the brainwave information; and a data matcher configured to determine the current emotional state of the viewer according to a preset correspondence between brainwave frequency bands and emotional states.
In one embodiment, the emotion validator may further include: an external form collector configured to acquire external form information of the viewer; and a data modifier configured to correct the determined current emotional state of the viewer according to the external form information.
In one embodiment, the external form collector may include: an image capturer configured to acquire facial expression information of the viewer and/or body motion information of the viewer; and/or an acoustic wave sensor configured to acquire voice information of the viewer.
According to yet another aspect of the present disclosure, a computer readable storage medium is provided. The computer readable storage medium stores computer software instructions which, when executed by a processor, cause the processor to perform the method as described above.
Brief Description of the Drawings
FIG. 1 is a flowchart of a control method for a smart picture frame according to an embodiment of the present disclosure;
FIG. 2 shows optional subsequent steps of the control method shown in FIG. 1;
FIG. 3 shows an embodiment of step S201 shown in FIG. 2;
FIG. 4 shows optional subsequent steps of the embodiment of step S201 shown in FIG. 3;
FIG. 5 is another flowchart of a control method for a smart picture frame according to an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of a control system for a smart picture frame according to an embodiment of the present disclosure;
FIG. 7 is another schematic structural diagram of a control system for a smart picture frame according to an embodiment of the present disclosure; and
FIG. 8 shows an embodiment of the emotion validator of the control system shown in FIG. 7.
Detailed Description
In order to make the objects, technical solutions and advantages of the present disclosure clearer, the present disclosure is further described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
FIG. 1 is a flowchart of a control method for a smart picture frame according to an embodiment of the present disclosure. As shown in FIG. 1, the method may include:
S101: determining identity information of a viewer located in front of the smart picture frame;
S102: invoking, according to a preset target emotional state and the identity information, a pre-stored control instruction corresponding to the identity information in the target emotional state;
S103: controlling, according to the invoked control instruction, the smart picture frame to perform a corresponding display adjustment.
In the above control method provided by the embodiments of the present disclosure, by identifying identity information of different viewers and invoking the pre-stored control instruction corresponding to the identity information in the target emotional state to control the smart picture frame to perform a corresponding display adjustment, the display information of the smart picture frame can be used to accurately adjust the viewer's emotional state toward the target emotional state, making the smart picture frame more humanized.
In one embodiment, the target emotional state may be set according to the application scenario of the smart picture frame. For example, in a relatively quiet office, the target emotional state may be set to soothing calm, so that the viewer's emotional state approaches soothing calm by viewing the display information of the smart picture frame, which is conducive to work. As another example, at a memorial service, the target emotional state may be set to sadness, so that the viewer's emotional state approaches sadness by viewing the display information of the smart picture frame, in keeping with the current scene.
In practical applications, the same smart picture frame may be used by multiple viewers, and the emotional states of different viewers react differently to the same display information. Therefore, when controlling the smart picture frame to perform a display adjustment, the adjustment should be targeted to the specific viewer. Based on this, in the above control method provided by the embodiments of the present disclosure, step S101 of determining the identity information of the current viewer located in front of the smart picture frame needs to be performed first.
The current viewer may be a viewer using the smart picture frame for the first time, or a viewer who has used it before. For a first-time viewer, corresponding identification information needs to be configured for subsequent identity recognition. For a viewer who has used the smart picture frame before, the identification information corresponding to the current viewer needs to be recognized among the stored identification information.
Based on this, in the above control method for a smart picture frame provided by the embodiments of the present disclosure, step S101 of determining the identity information of the current viewer located in front of the smart picture frame may include:
first, acquiring feature information of the viewer located in front of the smart picture frame, for example facial feature information;
then, determining whether there is feature information matching the feature information of the viewer among the stored pieces of feature information;
if so, determining the identification information corresponding to the matching feature information as the identity information of the viewer;
if not, assigning new identification information to the feature information as the identity information of the viewer.
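As an illustration only, the matching-and-assignment logic of step S101 might be sketched as follows in Python; the feature representation, the `match` predicate, and the `viewer-N` identifier scheme are assumptions made for the sketch, not part of the disclosure:

```python
import itertools

_next_id = itertools.count(1)

def determine_identity(features, stored, match=lambda a, b: a == b):
    """Return the identifier for `features`; if no stored feature record
    matches, register the features under a newly assigned identifier."""
    for viewer_id, known in stored.items():
        if match(features, known):
            return viewer_id          # returning viewer: reuse the stored ID
    viewer_id = f"viewer-{next(_next_id)}"
    stored[viewer_id] = features      # first-time viewer: assign a new ID
    return viewer_id
```

In practice the `match` predicate would be a face-feature comparison with a similarity threshold rather than exact equality.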
In the above control method for a smart picture frame provided by the embodiments of the present disclosure, for a given target emotional state, a plurality of candidate control instructions corresponding to that target emotional state are stored in advance, each candidate control instruction being used to control the smart picture frame to perform a corresponding display adjustment. Different candidate control instructions control different display adjustments. In one embodiment, for the target emotional state, one of the candidate control instructions may be preset as an initial control instruction. In this case, the initial control instruction corresponds to the target emotional state. Therefore, all newly assigned identification information (i.e., all new identity information) corresponds to the same initial control instruction. For example, if the given target emotional state is soothing, the candidate control instructions may include: a control instruction for changing the background picture information to green, a control instruction for changing the brightness of the picture, a control instruction for displaying family picture information, and a control instruction for displaying beautiful scenery such as vast grasslands, a boundless sea, towering mountain forests, trickling streams, and the like.
For a first-time viewer, the identity information is newly assigned identification information, and no candidate control instruction corresponding to the newly assigned identification information is pre-stored in the memory. Thus, step S102 of invoking the pre-stored control instruction corresponding to the identity information in the target emotional state according to the set target emotional state and the identity information may include invoking a pre-stored initial control instruction in the target emotional state as the control instruction corresponding to the identity information. For a viewer who has used the smart picture frame before, since a candidate control instruction corresponding to the viewer's identity information is pre-stored in the memory, the pre-stored candidate control instruction corresponding to the identity information in the target emotional state can be invoked directly. In this case, the invoked control instruction may be either the initial control instruction or another candidate control instruction corresponding to the target emotional state.
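A minimal sketch of the lookup in step S102, assuming a dictionary keyed by (target state, identity) and a candidate list whose first entry serves as the preset initial control instruction; the instruction names are invented for illustration:

```python
def invoke_instruction(identity, target_state, per_viewer, candidates):
    """Return the stored instruction for (target_state, identity);
    fall back to the preset initial instruction for a new identity."""
    key = (target_state, identity)
    if key in per_viewer:
        return per_viewer[key]            # previously learned instruction
    return candidates[target_state][0]    # preset initial control instruction
```

A viewer with no stored entry receives the initial instruction; a returning viewer receives whatever the self-learning loop last stored.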
In the above control method for a smart picture frame provided by the embodiments of the present disclosure, after step S103 of controlling the smart picture frame to perform the corresponding display adjustment is performed, the emotional state of the viewer with the same identity information may change with the changed picture information being viewed, for example from depressed to relaxed, from angry to sad, or from relaxed to sad. In order to make the current emotional state of the same viewer better match the target emotional state, the control instruction corresponding to the identity information of the same viewer in the target emotional state can be continuously corrected. To this end, the control method according to the embodiments of the present disclosure may further include, on the basis of the method of FIG. 1, the optional subsequent steps shown in FIG. 2. FIG. 2 shows optional subsequent steps of the control method shown in FIG. 1 and is merely one example of correcting the control instruction in a self-learning manner. After step S103 shown in FIG. 1, the following steps shown in FIG. 2 may further be included:
S201: determining a current emotional state of the viewer with the same identity information when viewing the adjusted smart picture frame;
S202: determining whether the current emotional state reaches the target emotional state; if not, performing step S203; if so, performing step S205;
S203: changing the control instruction corresponding to the identity information according to a plurality of preset candidate control instructions corresponding to the target emotional state;
S204: re-controlling, according to the changed control instruction, the smart picture frame to perform a corresponding display adjustment, and then returning to step S201;
S205: storing the current control instruction as the control instruction corresponding to the identity information.
As shown in FIG. 2, after step S205, the control method may end.
By continuously repeating steps S201 to S204 in a loop, the control instruction corresponding to the identity information in the target emotional state is continuously corrected, so that the emotional state of the viewer corresponding to the identity information can be effectively adjusted to reach the target emotional state.
In one embodiment, in the above control method provided by the embodiments of the present disclosure, step S202 of determining whether the current emotional state reaches the target emotional state may include determining whether the current emotional state is consistent with the target emotional state, or falls within an index range of the target emotional state. For example, suppose the target emotional state is soothing. In this case, if the detected current emotional state is relaxed or pleasant, the current emotional state has already reached the target emotional state, so there is no need to further adjust the display information of the smart picture frame, i.e., the current display information of the smart picture frame is maintained.
In one embodiment, in the above control method provided by the embodiments of the present disclosure, step S203 of changing the control instruction corresponding to the identity information according to the plurality of preset candidate control instructions corresponding to the target emotional state may include: randomly or sequentially selecting, among the plurality of candidate control instructions, a candidate control instruction different from the current control instruction corresponding to the identity information, as the changed control instruction corresponding to the identity information.
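The sequential variant of step S203 might look like the following; treating the candidate list as a ring so that selection wraps around is one possible reading of the text, not the only one:

```python
def next_candidate(current, candidates):
    """Sequentially pick the candidate instruction after `current`,
    wrapping to the start of the list at the end (sketch of step S203)."""
    i = candidates.index(current) if current in candidates else -1
    return candidates[(i + 1) % len(candidates)]
```

With at least two candidates, the returned instruction always differs from the current one, as step S203 requires.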
In the above control method provided by the embodiments of the present disclosure, step S201 of determining the current emotional state of the viewer with the same identity information when viewing the adjusted smart picture frame may be performed a fixed period of time, for example 5 minutes, after step S103 of controlling the smart picture frame to perform the corresponding display adjustment, or after step S204 of re-controlling the smart picture frame to perform the corresponding display adjustment, so as to leave the viewer sufficient time for the emotional state to change.
FIG. 3 shows an embodiment of step S201 shown in FIG. 2. As shown in FIG. 3, step S201 of determining the current emotional state of the viewer with the same identity information when viewing the adjusted smart picture frame may include:
S301: acquiring brainwave information of the viewer;
S302: determining a brainwave frequency of the viewer according to the brainwave information;
S303: determining the current emotional state of the viewer according to a preset correspondence between brainwave frequency bands and emotional states.
The brainwave information of the viewer changes with the viewer's mood, that is, with the viewer's inner activity. In step S301, various waveform information of the brainwaves can be directly acquired. By algorithmically analyzing the waveform information of the brainwaves in step S302, the brainwave frequency of the viewer can be obtained. As can be seen from Table 1 below, different brainwave frequency bands correspond to different mood states of the viewer. Therefore, the correspondence between brainwave frequency bands and emotional states can be preset, and the current emotional state of the viewer can then be determined according to the determined brainwave frequency.
For example, when the detected brainwave frequency of the viewer falls in the Beta (B) band, it can be determined that the viewer is relatively tense. When the detected brainwave frequency of the viewer falls in the Alpha (A) band, it can be determined that the viewer is in a relatively relaxed state.
Figure PCTCN2019078495-appb-000001
Table 1
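Since Table 1 is reproduced only as an image in this publication, the band boundaries and labels below are assumptions drawn from conventional EEG band definitions, not the patent's own table; they serve only to illustrate the mapping performed in step S303:

```python
# Conventional EEG bands (Hz) mapped to illustrative emotional states.
# Boundaries and labels are assumptions standing in for Table 1.
BANDS = [
    (0.5, 4.0, "deep sleep"),   # Delta
    (4.0, 8.0, "drowsy"),       # Theta
    (8.0, 13.0, "relaxed"),     # Alpha
    (13.0, 30.0, "tense"),      # Beta
]

def classify_emotion(freq_hz):
    """Map a dominant brainwave frequency to an emotional state (step S303)."""
    for low, high, state in BANDS:
        if low <= freq_hz < high:
            return state
    return "unknown"
```

A frequency in the Alpha range yields "relaxed" and one in the Beta range yields "tense", matching the worked example in the text above.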
In the above control method provided by the embodiments of the present disclosure, since judging the viewer's mood based on brainwave information alone may involve some deviation, step S201 may further include, on the basis of the steps shown in FIG. 3, the steps shown in FIG. 4, so as to correct the determined current emotional state and make it more accurate.
FIG. 4 shows optional subsequent steps of the embodiment of step S201 shown in FIG. 3. After, for example, step S303, as shown in FIG. 4, determining the current emotional state of the viewer with the same identity information when viewing the adjusted smart picture frame in step S201 may further include:
S401: acquiring external form information of the viewer;
S402: correcting the determined current emotional state of the viewer according to the external form information.
The external form information of the viewer acquired in step S401 can directly reflect the viewer's emotional state. The external form information can supplement the brainwave information and accurately distinguish similar emotions, making the finally determined current emotional state of the viewer more accurate. For example, if the current emotional state of the viewer determined from the brainwave information is relaxed, and the acquired external form information shows that the viewer is smiling, the current emotional state of the viewer can be further refined to pleasant.
In the above control method provided by the embodiments of the present disclosure, there may be various kinds of external form information reflecting the viewer's emotional state, such as facial expressions, body language, and speech volume and pitch. Based on this, step S401 of acquiring the external form information of the viewer may include:
acquiring facial expression information of the viewer, specifically via an image acquisition device such as a camera; for example, information indicating that the viewer's face is in a crying state can be acquired; and/or
acquiring voice information of the viewer, specifically volume and pitch information via a device such as a microphone or an acoustic wave sensor; for example, information indicating that the viewer's voice is loud and high-pitched can be acquired; and/or
acquiring body motion information of the viewer, specifically body language via a device such as an infrared sensor or a camera; for example, information indicating that the viewer's limbs are moving rapidly can be acquired.
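One way to picture the correction of steps S401 and S402 is a small rule-based refinement; the specific rules below (a smile refines "relaxed" to "pleasant"; a loud voice or fast motion sharpens "tense") are illustrative assumptions built from the examples in the text, not rules stated by the disclosure:

```python
def correct_emotion(eeg_state, facial=None, voice_loud=None, motion_fast=None):
    """Refine the EEG-derived emotional state with external form cues
    (sketch of steps S401-S402); the refinement rules are illustrative."""
    if eeg_state == "relaxed" and facial == "smiling":
        return "pleasant"            # smile refines 'relaxed' to 'pleasant'
    if eeg_state == "tense" and (voice_loud or motion_fast):
        return "agitated"            # loud voice / fast motion sharpens 'tense'
    return eeg_state                 # no cue: keep the EEG-derived state
```

When no external cue applies, the EEG-derived state is kept unchanged, so the correction can only refine, never discard, the brainwave result.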
By acquiring the above various kinds of specific external form information and combining them with the brainwave information, the current emotional state of the viewer can be accurately recognized. FIG. 5 is another flowchart of the control method for a smart picture frame according to an embodiment of the present disclosure. As shown in FIG. 5, the control method may include the following steps:
S501: determining identity information of a viewer located in front of the smart picture frame;
S502: invoking, according to the set target emotional state and the identity information, a pre-stored control instruction corresponding to the identity information in the target emotional state;
S503: controlling, according to the invoked control instruction, the smart picture frame to perform a corresponding display adjustment;
S504: waiting for a set period of time, for example 5 minutes;
S505: acquiring brainwave information of the viewer with the same identity information;
S506: determining a brainwave frequency of the viewer according to the brainwave information;
S507: determining the current emotional state of the viewer according to a preset correspondence between brainwave frequency bands and emotional states;
S508: determining whether the current emotional state reaches the target emotional state; if not, performing step S509; if so, performing step S511;
S509: selecting, among a plurality of preset candidate control instructions corresponding to the target emotional state, a candidate control instruction different from the current control instruction corresponding to the identity information, as the changed control instruction corresponding to the identity information;
S510: re-controlling, according to the changed control instruction, the smart picture frame to perform a corresponding display adjustment, and then returning to step S504;
S511: storing the current control instruction as the control instruction corresponding to the identity information.
After step S511, the control method may end.
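Putting the pieces together, the self-learning loop of FIG. 5 can be sketched as follows, with the 5-minute interval of S504 elided, `sense_emotion` and `apply_display` standing in for the sensing and display hardware, and `max_rounds` an assumption added so the sketch terminates:

```python
def adjust_until_target(viewer_id, target, candidates, stored,
                        sense_emotion, apply_display, max_rounds=10):
    """Self-learning loop of FIG. 5: try candidate instructions until the
    sensed emotional state reaches the target, then store the winner."""
    instr = stored.get((target, viewer_id), candidates[0])  # S501-S502
    for _ in range(max_rounds):
        apply_display(instr)                     # S503 / S510
        if sense_emotion(viewer_id) == target:   # S504-S508 (interval elided)
            stored[(target, viewer_id)] = instr  # S511: remember what worked
            return instr
        i = candidates.index(instr)
        instr = candidates[(i + 1) % len(candidates)]  # S509: next candidate
    return None
```

On the next visit by the same viewer, `stored` already holds the instruction that worked, so the loop terminates after a single display adjustment.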
Based on the same inventive concept, embodiments of the present disclosure further provide a control system for a smart picture frame. Since the principle by which the system solves the problem is similar to that of the foregoing control method, the implementation of the system can refer to the implementation of the control method, and repeated parts are not described again.
FIG. 6 is a schematic structural diagram of a control system for a smart picture frame according to an embodiment of the present disclosure. As shown in FIG. 6, the control system may include:
an identity determiner 601 configured to determine identity information of a viewer located in front of the smart picture frame;
an instruction invoker 602 configured to invoke, according to a preset target emotional state and the identity information, a pre-stored control instruction corresponding to the identity information in the target emotional state;
a display controller 603 configured to control, according to the invoked control instruction, the smart picture frame to perform a corresponding display adjustment.
In one embodiment, the identity determiner 601, the instruction invoker 602, and the display controller 603 may all be integrated in the smart picture frame. In another embodiment, the instruction invoker 602 and the display controller 603 may be integrated in the smart picture frame while the identity determiner 601 is separately provided in a device outside the smart picture frame. In yet another embodiment, the display controller 603 may be integrated in the smart picture frame while the identity determiner 601 and the instruction invoker 602 are provided in a device outside the smart picture frame. This is not limited herein.
FIG. 7 is another schematic structural diagram of a control system for a smart picture frame according to an embodiment of the present disclosure. In addition to the components shown in FIG. 6 (i.e., the identity determiner 601, the instruction invoker 602, and the display controller 603), the control system shown in FIG. 7 may further include:
an emotion validator 604 configured to determine a current emotional state of a viewer with the same identity information when viewing the adjusted smart picture frame;
a data processor 605 configured to determine whether the current emotional state reaches the target emotional state;
a memory 606 configured to store, when it is determined that the current emotional state reaches the target emotional state, the current control instruction as the control instruction corresponding to the identity information; and
an instruction changer 607 configured to change, when it is determined that the current emotional state does not reach the target emotional state, the control instruction corresponding to the identity information according to a plurality of preset candidate control instructions corresponding to the target emotional state.
In one embodiment, in addition to controlling the smart picture frame to perform the corresponding display adjustment according to the invoked control instruction, the display controller 603 may further be configured to re-control, according to the changed control instruction, the smart picture frame to perform a corresponding display adjustment.
According to the present disclosure, the emotion validator 604 may be configured to determine the current emotional state of the viewer with the same identity information at fixed time intervals (for example, every 5 minutes). In this way, the emotion validator 604, the data processor 605, the instruction changer 607, the memory 606, and the display controller 603 form a loop that performs the related operations according to the foregoing logic until the data processor 605 determines that the current emotional state of the viewer with the same identity information reaches the target emotional state.
FIG. 8 shows an embodiment of the emotion validator 604 of the control system shown in FIG. 7. As shown in FIG. 8, the emotion validator 604 may include:
an EEG signal collector 614 configured to acquire brainwave information of the viewer;
a signal processor 624 configured to determine a brainwave frequency of the viewer according to the brainwave information;
a data matcher 634 configured to determine the current emotional state of the viewer according to a preset correspondence between brainwave frequency bands and emotional states.
In one embodiment, the EEG signal collector 614 may include external electrodes or internal electrodes embedded in the cerebral cortex, which is not limited in the present disclosure.
In the above control system provided by the embodiments of the present disclosure, as shown in FIG. 8, the emotion validator 604 may further include:
an external form collector 644 configured to acquire external form information of the viewer;
a data modifier 654 configured to correct the determined current emotional state of the viewer according to the external form information.
In the above control system provided by the embodiments of the present disclosure, as shown in FIG. 8, the external form collector 644 may include:
an image capturer 644a configured to acquire facial expression information of the viewer and/or body motion information of the viewer;
and/or an acoustic wave sensor 644b configured to acquire voice information of the viewer.
The image capturer 644a may be a camera.
Embodiments of the present disclosure further provide a computer readable storage medium storing computer software instructions which, when executed by a processor, cause the processor to perform the control method for a smart picture frame as described above.
From the description of the above embodiments, those skilled in the art can clearly understand that the embodiments of the present disclosure may be implemented by hardware, or by means of software plus a necessary general hardware platform. Based on such understanding, the technical solutions of the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a removable hard disk) and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform the methods of the various embodiments of the present disclosure.
Those skilled in the art can understand that the accompanying drawings are merely schematic diagrams of preferred embodiments, and the modules or processes in the drawings are not necessarily required for implementing the present disclosure.
Those skilled in the art can understand that the modules in the apparatus of an embodiment may be distributed in the apparatus of the embodiment according to the description of the embodiment, or may be correspondingly relocated in one or more apparatuses different from that of the embodiment. The modules of the above embodiments may be combined into one module, or may be further split into multiple sub-modules.
The serial numbers of the above embodiments of the present disclosure are for description only and do not represent the superiority or inferiority of the embodiments.
The above control method, control system, and computer readable medium for a smart picture frame provided by the embodiments of the present disclosure, by identifying identity information of different viewers and invoking the pre-stored control instruction corresponding to the identity information in the target emotional state to control the smart picture frame to perform a corresponding display adjustment, achieve the effect of accurately adjusting the viewer's emotional state to the target emotional state by means of the display information of the smart picture frame, making the smart picture frame more humanized.
Obviously, those skilled in the art can make various modifications and variations to the present disclosure without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the present disclosure and their equivalent technologies, the present disclosure is also intended to encompass them.

Claims (15)

  1. A control method for a smart picture frame, comprising:
    determining identity information of a viewer located in front of the smart picture frame;
    invoking, according to a preset target emotional state and the identity information, a pre-stored control instruction corresponding to the identity information in the target emotional state; and
    controlling, according to the invoked control instruction, the smart picture frame to perform a corresponding display adjustment.
  2. The control method according to claim 1, further comprising:
    determining a current emotional state of a viewer with the same identity information when viewing the adjusted smart picture frame;
    determining whether the current emotional state reaches the target emotional state;
    if not, changing the control instruction corresponding to the identity information according to a plurality of preset candidate control instructions corresponding to the target emotional state, and re-controlling, according to the changed control instruction, the smart picture frame to perform a corresponding display adjustment, so as to determine the current emotional state of the viewer with the same identity information again;
    if so, storing the current control instruction as the control instruction corresponding to the identity information.
  3. The control method according to claim 2, wherein determining the current emotional state of the viewer with the same identity information when viewing the adjusted smart picture frame comprises:
    acquiring brainwave information of the viewer;
    determining a brainwave frequency of the viewer according to the brainwave information; and
    determining the current emotional state of the viewer according to a preset correspondence between brainwave frequency bands and emotional states.
  4. The control method according to claim 3, wherein determining the current emotional state of the viewer with the same identity information when viewing the adjusted smart picture frame further comprises:
    acquiring external form information of the viewer; and
    correcting the determined current emotional state of the viewer according to the external form information.
  5. The control method according to claim 4, wherein acquiring the external form information of the viewer comprises:
    acquiring facial expression information of the viewer; and/or
    acquiring voice information of the viewer; and/or
    acquiring body motion information of the viewer.
  6. The control method according to claim 1, wherein determining the identity information of the viewer located in front of the smart picture frame comprises:
    acquiring feature information of the viewer located in front of the smart picture frame;
    determining whether there is feature information matching the feature information of the viewer among stored pieces of feature information;
    if so, determining identification information corresponding to the matching feature information as the identity information of the viewer;
    if not, assigning new identification information to the feature information as the identity information of the viewer.
  7. The control method according to claim 1, wherein invoking, according to the preset target emotional state and the identity information, the pre-stored control instruction corresponding to the identity information in the target emotional state comprises:
    invoking a pre-stored initial control instruction.
  8. The control method according to claim 7, wherein the initial control instruction is one of a plurality of candidate control instructions corresponding to the preset target emotional state.
  9. A control system for a smart picture frame, comprising:
    an identity determiner configured to determine identity information of a viewer located in front of the smart picture frame;
    an instruction invoker configured to invoke, according to a preset target emotional state and the identity information, a pre-stored control instruction corresponding to the identity information in the target emotional state; and
    a display controller configured to control, according to the invoked control instruction, the smart picture frame to perform a corresponding display adjustment.
  10. The control system according to claim 9, further comprising:
    an emotion validator configured to determine a current emotional state of a viewer with the same identity information when viewing the adjusted smart picture frame;
    a data processor configured to determine whether the current emotional state reaches the target emotional state;
    a memory configured to store, when it is determined that the current emotional state reaches the target emotional state, the current control instruction as the control instruction corresponding to the identity information; and
    an instruction changer configured to change, when it is determined that the current emotional state does not reach the target emotional state, the control instruction corresponding to the identity information according to a plurality of preset candidate control instructions corresponding to the target emotional state;
    wherein the display controller is further configured to re-control, according to the changed control instruction, the smart picture frame to perform a corresponding display adjustment.
  11. The control system according to claim 10, wherein the emotion validator is configured to determine the current emotional state of the viewer with the same identity information at fixed time intervals.
  12. The control system according to claim 10, wherein the emotion validator comprises:
    an EEG signal collector configured to acquire brainwave information of the viewer;
    a signal processor configured to determine a brainwave frequency of the viewer according to the brainwave information; and
    a data matcher configured to determine the current emotional state of the viewer according to a preset correspondence between brainwave frequency bands and emotional states.
  13. The control system according to claim 12, wherein the emotion validator further comprises:
    an external form collector configured to acquire external form information of the viewer; and
    a data modifier configured to correct the determined current emotional state of the viewer according to the external form information.
  14. The control system according to claim 13, wherein the external form collector comprises:
    an image capturer configured to acquire facial expression information of the viewer and/or body motion information of the viewer; and/or
    an acoustic wave sensor configured to acquire voice information of the viewer.
  15. A computer readable storage medium storing computer software instructions which, when executed by a processor, cause the processor to perform the method according to any one of claims 1 to 8.
PCT/CN2019/078495 2018-03-30 2019-03-18 Control method and control system for smart picture frame, and computer readable storage medium WO2019184745A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/605,941 US11455036B2 (en) 2018-03-30 2019-03-18 Control method of iGallery, control system of iGallery, and computer readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810278095.1 2018-03-30
CN201810278095.1A CN108549483B (zh) 2018-03-30 2018-03-30 一种智能画框的控制方法及控制系统

Publications (1)

Publication Number Publication Date
WO2019184745A1 true WO2019184745A1 (zh) 2019-10-03

Family

ID=63517526

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/078495 WO2019184745A1 (zh) Control method and control system for smart picture frame, and computer readable storage medium

Country Status (3)

Country Link
US (1) US11455036B2 (zh)
CN (1) CN108549483B (zh)
WO (1) WO2019184745A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108549483B (zh) 2018-03-30 2020-08-18 京东方科技集团股份有限公司 Control method and control system for a smart picture frame
CN113390170A (zh) * 2021-06-08 2021-09-14 青岛海尔空调器有限总公司 Method and device for controlling an air conditioner, and air conditioner

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN205644515U (zh) * 2016-04-29 2016-10-12 长沙三帆信息科技有限公司 Smart electronic picture frame
CN107172337A (zh) * 2017-06-29 2017-09-15 京东方科技集团股份有限公司 Smart picture frame and switching method for an image acquisition device therein
CN107424019A (zh) * 2017-08-15 2017-12-01 京东方科技集团股份有限公司 Artwork recommendation method, apparatus, medium and electronic device based on emotion recognition
CN108549483A (zh) * 2018-03-30 2018-09-18 京东方科技集团股份有限公司 Control method and control system for a smart picture frame

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5335961B2 (ja) * 2011-11-28 2013-11-06 シャープ株式会社 Display device and television receiver
CN103405239B (zh) * 2013-08-07 2016-01-06 青岛赛博凯尔信息技术有限公司 Virtual reality therapy system for driving-related post-traumatic stress disorder
US10437332B1 (en) * 2015-10-30 2019-10-08 United Services Automobile Association System and method for emotional context communication
CN106874265B (zh) * 2015-12-10 2021-11-26 深圳新创客电子科技有限公司 Content output method matching a user's emotion, electronic device and server
CN106658178B (zh) 2017-01-03 2020-02-07 京东方科技集团股份有限公司 Display control device and control method thereof
CN106909907A (zh) * 2017-03-07 2017-06-30 佛山市融信通企业咨询服务有限公司 Auxiliary emotion analysis system for video communication
US10772551B2 (en) * 2017-05-09 2020-09-15 International Business Machines Corporation Cognitive progress indicator
CN107186728B (zh) * 2017-06-15 2020-02-14 重庆柚瓣家科技有限公司 Control system for an intelligent elderly-care service robot
CN107330722A (zh) * 2017-06-27 2017-11-07 昝立民 Advertisement delivery method for shared devices
CN107320114B (zh) * 2017-06-29 2020-12-25 京东方科技集团股份有限公司 Photographing processing method, system and device based on brainwave detection


Also Published As

Publication number Publication date
US20210124418A1 (en) 2021-04-29
CN108549483B (zh) 2020-08-18
CN108549483A (zh) 2018-09-18
US11455036B2 (en) 2022-09-27

Similar Documents

Publication Publication Date Title
US11321385B2 (en) Visualization of image themes based on image content
US10083710B2 (en) Voice control system, voice control method, and computer readable medium
WO2021027424A1 (zh) 图像采集的控制方法及采集终端
RU2578210C1 (ru) Способ и устройство для корректировки цвета кожи
CN104049721B (zh) 信息处理方法及电子设备
WO2015143875A1 (zh) 内容呈现方法,内容呈现方式的推送方法和智能终端
WO2015074476A1 (zh) 一种图像处理方法、装置和存储介质
US10015385B2 (en) Enhancing video conferences
EP3075142A1 (en) Shift camera focus based on speaker position
WO2020119032A1 (zh) 基于生物特征的声源追踪方法、装置、设备及存储介质
US11308692B2 (en) Method and device for processing image, and storage medium
WO2020114047A1 (zh) 图像风格迁移及数据存储方法、装置和电子设备
WO2022198934A1 (zh) 卡点视频的生成方法及装置
WO2021043121A1 (zh) 一种图像换脸的方法、装置、系统、设备和存储介质
WO2019184745A1 (zh) 智能画框的控制方法、控制系统及计算机可读存储介质
WO2021143574A1 (zh) 增强现实眼镜、基于增强现实眼镜的ktv实现方法与介质
JP2022518520A (ja) 画像変形の制御方法、装置およびハードウェア装置
US10261749B1 (en) Audio output for panoramic images
WO2021232875A1 (zh) 一种驱动数字人的方法、装置及电子设备
JP2022111966A (ja) 音声及びビデオ会議アプリケーションの音量調節
AU2013222959B2 (en) Method and apparatus for processing information of image including a face
CN105635573B (zh) 摄像头视角调整方法和装置
CN112533070A (zh) 视频声音和画面的调整方法、终端和计算机可读存储介质
CN111185903A (zh) 控制机械臂绘制人像画的方法、装置及机器人系统
CN108171671B (zh) 一种放大眼睛的美型方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19775187

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 19775187

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08.04.2021)
