CN110941196A - Intelligent panel, multi-level interaction method based on angle detection and storage medium - Google Patents

Intelligent panel, multi-level interaction method based on angle detection and storage medium

Info

Publication number
CN110941196A
CN110941196A (application number CN201911195349.4A)
Authority
CN
China
Prior art keywords
angle
interaction
user
scene
intelligent
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911195349.4A
Other languages
Chinese (zh)
Inventor
赵振宇 (Zhao Zhenyu)
皮毅明 (Pi Yiming)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xingluo Intelligent Technology Co Ltd
Original Assignee
Xingluo Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xingluo Intelligent Technology Co Ltd
Priority to CN201911195349.4A
Publication of CN110941196A
Current legal status: Pending

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00 Systems controlled by a computer
    • G05B15/02 Systems controlled by a computer electric
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 Programme-control systems
    • G05B19/02 Programme-control systems electric
    • G05B19/418 Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 Program-control systems
    • G05B2219/20 Pc systems
    • G05B2219/26 Pc applications
    • G05B2219/2642 Domotique, domestic, home control, automation, smart house

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Manufacturing & Machinery (AREA)
  • Quality & Reliability (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a multi-level interaction method based on angle detection, comprising the following steps: a camera of an intelligent panel captures an image of the environment around the panel; the intelligent panel determines whether a human face is present in the environment image; when a face is present, the panel recognizes the shooting angle of the face in the image; the panel compares the shooting angle against a first angle threshold interval and a second angle threshold interval, where the angle values in the first interval are larger than those in the second; when the shooting angle falls in the first interval, the panel acquires the current scene and provides the user with a first interactive interface corresponding to that scene; and when the shooting angle falls in the second interval, the panel acquires the current scene and provides the user with a second interactive interface corresponding to that scene. In this way, disturbance to the user is reduced, the complexity of user operation is reduced, and the human-computer interaction experience is improved.

Description

Intelligent panel, multi-level interaction method based on angle detection and storage medium
Technical Field
The invention relates to the technical field of smart homes, and in particular to an intelligent panel, a multi-level interaction method based on angle detection, and a storage medium.
Background
The smart home is an embodiment of the Internet of Things under the influence of the Internet. A smart home connects the various devices in a home (audio and video equipment, lighting systems, curtain control, air-conditioning control, security systems, digital cinema systems, audio/video servers, video storage systems, networked appliances, and the like) through Internet-of-Things technology, providing functions and means such as appliance control, lighting control, telephone remote control, indoor and outdoor remote control, burglar alarms, environment monitoring, HVAC control, infrared forwarding, and programmable timed control. Compared with an ordinary home, a smart home retains the traditional living functions while also integrating building, network communication, information appliance, and equipment automation functions; it provides all-around information interaction and can even save money on energy costs.
The human-computer interaction mode in a smart home directly determines the user experience. Traditional smart homes offer two interaction modes. In the first, the user interacts through an app on a mobile terminal (for example, a smartphone); this is cumbersome, since the user must drill down layer by layer to find and control the corresponding home device. In the second, the central control device of the smart home executes control over home devices automatically according to a scene detection result; this seriously disturbs users, for example when a curtain the user did not want opened at a certain time is opened anyway because of scene misdetection or because the user's habits vary from day to day. Both modes give users a poor home experience and hinder the promotion and adoption of smart home products.
Disclosure of Invention
The technical problem mainly addressed by this application is to provide an intelligent panel, a multi-level interaction method based on angle detection, and a storage medium that reduce disturbance to the user, reduce the complexity of user operation, and improve the human-computer interaction experience.
In order to solve the above technical problem, one technical solution adopted in the embodiments of the present application is to provide a multi-level interaction method based on angle detection, comprising the following steps: a camera of an intelligent panel captures an image of the environment around the panel; the intelligent panel determines whether a human face is present in the environment image; when a face is present, the panel recognizes the shooting angle of the face in the image; the panel compares the shooting angle against a first angle threshold interval and a second angle threshold interval, where the angle values in the first interval are larger than those in the second; when the shooting angle falls in the first interval, the panel acquires the current scene and provides the user with a first interactive interface corresponding to the scene; and when the shooting angle falls in the second interval, the panel acquires the current scene and provides the user with a second interactive interface corresponding to the scene.
The step of recognizing the shooting angle of the face in the environment image comprises: detecting the area of the region occupied by the face; calculating the ratio of the area of that region to a pre-stored reference area; and calculating the shooting angle from the ratio.
The interaction method further comprises: acquiring the facial features of the face; identifying, from the facial features, the identity information of the user corresponding to the face; and looking up the corresponding pre-stored reference area, according to the identity information, in a pre-stored correspondence table of identity information and reference areas.
Wherein the interaction modes corresponding to the first interactive interface and the second interactive interface are different.
Wherein the first interactive interface is a voice interaction interface, and the second interactive interface is a touch interaction interface.
Wherein the interactive content of the first interactive interface is different from that of the second interactive interface.
Wherein the first interactive interface is a first touch button menu, the second interactive interface is a second touch button menu, and the second touch button menu is a secondary touch button menu corresponding to one touch button in the first touch button menu.
Wherein the first interactive interface is a first voice menu, the second interactive interface is a second voice menu, and the second voice menu is a secondary voice menu corresponding to one voice option in the first voice menu.
Wherein the interactive content of the first interactive interface is to ask whether the user wants to execute, with one key, all the operation items corresponding to the scene, and the interactive content of the second interactive interface is to ask the user separately whether to execute each operation item corresponding to the scene.
Wherein the multi-level interaction method further comprises: receiving an operation instruction input by the user at the first interactive interface or the second interactive interface, so as to execute the corresponding operation.
Wherein the step of acquiring the current scene comprises: the processor acquires the current time; and the processor looks up the corresponding scene, according to the current time, in a pre-stored correspondence table of time and scene.
Alternatively, the step of acquiring the current scene comprises: the processor acquires at least one of data from the indoor smart appliances and data from the indoor sensing devices through the communicator of the intelligent panel; and the processor analyzes the current scene from at least one of those data.
In order to solve the above technical problem, another technical solution adopted in the embodiment of the present application is: there is provided a smart panel comprising a processor and a memory electrically connected to the processor, the memory for storing a computer program, the processor for invoking the computer program to perform the above method.
In order to solve the above technical problem, another technical solution adopted in the embodiments of the present application is: a storage medium is provided which stores a computer program executable by a processor to implement the above-described method.
In the embodiments of the present application, the camera of the intelligent panel captures an image of the environment around the panel; the panel determines whether a human face is present in the image; when a face is present, the panel recognizes the shooting angle of the face in the image; the panel compares the shooting angle against a first angle threshold interval and a second angle threshold interval, where the angle values in the first interval are larger than those in the second; when the shooting angle falls in the first interval, the panel acquires the current scene and provides the user with a first interactive interface corresponding to the scene; and when the shooting angle falls in the second interval, the panel acquires the current scene and provides the user with a second interactive interface corresponding to the scene. Disturbance to the user is thereby reduced, the complexity of user operation is reduced, and the human-computer interaction experience is improved.
Drawings
Fig. 1 is a schematic hardware structure diagram of an intelligent home control system according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a hardware structure of an intelligent panel according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating a multi-level interaction method based on angle detection according to a first embodiment of the present application;
FIG. 4 is a flowchart illustrating a multi-level interaction method based on angle detection according to a second embodiment of the present application;
FIG. 5 is a flowchart illustrating a multi-level interaction method based on angle detection according to a third embodiment of the present application;
FIG. 6 is a diagram illustrating a first touch button menu according to an embodiment of the present disclosure;
FIG. 7 is a diagram illustrating a second touch button menu according to an embodiment of the present disclosure;
FIG. 8 is a flowchart illustrating a multi-level interaction method based on angle detection according to a fourth embodiment of the present application;
FIG. 9 is a schematic flow chart diagram illustrating one embodiment of obtaining a current scene according to the present application;
FIG. 10 is a schematic flow chart diagram illustrating another embodiment of the present application for obtaining a current scene;
fig. 11 is a schematic diagram of a hardware structure of an intelligent panel according to another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some of the structures related to the present application are shown in the drawings, not all of the structures. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", etc. in this application are used to distinguish between different objects and not to describe a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The intelligent panel is an important component of a smart home control system. Its main control objects are intelligent household appliances, and it highly integrates the facilities of home life by using integrated wiring, network communication, security, automatic control, and audio/video technologies.
The intelligent panel is a central control system integrating subsystems such as lighting, audio, curtains, thermostats, and sensors. Through control modes such as remote control, smartphone remote control, touch interaction, and voice interaction, it enables intelligent control and management of a residence's lighting, electric curtains, temperature and humidity, household appliances, and more, providing an intelligent, comfortable, high-quality life.
The intelligent panel may be embedded in an indoor wall, or it may be a free-standing panel placed on a desktop.
Referring to fig. 1, fig. 1 is a schematic diagram of a hardware structure of an intelligent home control system according to an embodiment of the present application.
In this embodiment, the smart home control system 10 may include a smart panel 11, a smart home device 12, an indoor sensing device 13, and a smart home cloud server 14.
The intelligent panel 11 is communicatively connected to the smart appliances 12, the indoor sensing devices 13, and the smart home cloud server 14. The connection may be wireless or wired, which is not limited in the embodiments of the present application. For example, a wireless connection may use WIFI, the mobile Internet, Zigbee, or the like, and a wired connection may use an RJ45 network cable, a USB data cable, or the like.
The smart appliance 12 may be a smart television, smart curtain, smart window, smart lock, smart refrigerator, smart air purifier, smart air conditioner, or the like; the embodiments of the present application are not limited in this respect.
The indoor sensing device 13 is used to collect real-time indoor data and may be a visual sensor (for example, a light sensor or a camera), a position detection sensor (for example, a radar, a proximity sensor, or a Hall element), a touch-type sensor (for example, a humidity sensor or a temperature sensor), or an olfactory sensor (for example, an odor sensor, a smoke sensor, an air quality sensor, or a detector for a specific toxic gas, such as a carbon monoxide sensor); the embodiments of the present application are not limited in this respect.
Referring to fig. 2, fig. 2 is a schematic diagram of a hardware structure of an intelligent panel according to an embodiment of the present application.
In this embodiment, the smart panel 11 may include a processor 111 and a memory 112 electrically connected to the processor 111, a camera 113, a display 115, a microphone 116, a speaker 117, and a communicator 118.
Referring to fig. 3, fig. 3 is a flowchart illustrating a multi-level interaction method based on angle detection according to a first embodiment of the present application.
In this embodiment, the multi-level interaction method based on angle detection may include the following steps:
step S31: the camera of the intelligent panel acquires an environment image around the intelligent panel.
Wherein, the processor 111 of the smart panel 11 controls the camera 113 to shoot the environment image around the smart panel 11.
Step S32: the intelligent panel judges whether a human face exists in the environment image.
The processor 111 of the smart panel 11 obtains the environment image captured by the camera 113 from the camera 113. The processor 111 determines whether a face is present in the ambient image.
In step S32, when the smart panel determines that a human face is present in the environment image, step S33 is performed.
In step S32, the smart panel returns to step S31 when it determines that there is no human face in the environment image.
Step S33: and recognizing the shooting angle of the face in the environment image.
The shooting angle refers to an included angle between a plane where the face is located and a photosensitive surface of a photosensitive element of the camera, and the plane where the face is located can be defined as a plane where a connecting line between two eyebrows and a nose bridge line are located. For example, the shooting angle of a human face is 0 degrees when the front face is opposed to the camera, and the shooting angle is 90 degrees when the side faces are opposed (the ears are opposed to the camera). In this embodiment, the light-sensing surface of the camera may be substantially parallel to the plane of the smart panel 11, for example, substantially parallel to the touch surface of the display.
Step S34: the intelligent panel judges the size relation between the shooting angle and a first angle threshold interval and a second angle threshold interval, wherein the angle value in the first angle threshold interval is larger than the angle value in the second angle threshold interval.
In step S34, step S35 is executed when it is determined that the photographing angle is in the first angle threshold section. Step S36 is executed when it is determined that the photographing angle is in the second angle threshold section.
Step S35: the method comprises the steps of obtaining a current scene and providing a first interactive interface corresponding to the scene for a user.
Step S36: and acquiring a current scene and providing a second interactive interface corresponding to the scene for the user.
When the processor 111 determines that the shooting angle of the face falls in the first angle threshold interval, the user is passing by the smart panel 11 or pausing in front of it with a large shooting angle; the panel therefore provides a first interactive interface corresponding to the current scene, avoiding the disturbance that actively executing the operations of the scene would cause.
When the processor 111 determines that the shooting angle of the face falls in the second angle threshold interval, the user has turned toward the smart panel 11; the panel therefore provides a second interactive interface corresponding to the current scene, again avoiding the disturbance of actively executed operations. By providing different interactive interfaces according to the shooting angle of the face, the panel lets the user operate it conveniently in different states, improving the interaction experience.
The first angle threshold interval may be (45°, 90°), and the second angle threshold interval may be [0°, 45°]. Here the angles refer to shooting angles.
Step S37: and receiving an operation instruction input by a user at the first interactive interface or the second interactive interface so as to execute corresponding operation.
The first interactive interface and the second interactive interface may be interactive interfaces of different interaction modes, or interactive interfaces of different levels, as described in the embodiments below.
Referring to fig. 4, fig. 4 is a flowchart illustrating a multi-level interaction method based on angle detection according to a second embodiment of the present application.
In one embodiment, the first interactive interface and the second interactive interface correspond to different interaction modes: the first interactive interface is of a first type and the second interactive interface is of a second type, where the type refers to the kind of interaction means. For example, the first interactive interface is a voice interaction interface and the second interactive interface is a touch interaction interface.
Specifically, in this embodiment, the multi-level interaction method based on angle detection may include the following steps:
step S41: the camera of the intelligent panel acquires an environment image around the intelligent panel.
Wherein, the processor 111 of the smart panel 11 controls the camera 113 to shoot the environment image around the smart panel 11.
Step S42: the intelligent panel judges whether a human face exists in the environment image.
The processor 111 of the smart panel 11 obtains the environment image captured by the camera 113 from the camera 113. The processor 111 determines whether a face is present in the ambient image.
In step S42, when the smart panel determines that a human face is present in the environment image, step S43 is performed.
In step S42, the smart panel returns to step S41 when it determines that there is no human face in the environment image.
Step S43: and recognizing the shooting angle of the face in the environment image.
The shooting angle refers to an included angle between a plane where the face is located and a photosensitive surface of a photosensitive element of the camera, and the plane where the face is located can be defined as a plane where a connecting line between two eyebrows and a nose bridge line are located. For example, the shooting angle of a human face is 0 degrees when the front face is opposed to the camera, and the shooting angle is 90 degrees when the side faces are opposed (the ears are opposed to the camera). In this embodiment, the light-sensing surface of the camera may be substantially parallel to the plane of the smart panel 11, for example, substantially parallel to the touch surface of the display.
Step S44: the intelligent panel judges the size relation between the shooting angle and a first angle threshold interval and a second angle threshold interval, wherein the angle value in the first angle threshold interval is larger than the angle value in the second angle threshold interval.
In step S44, step S45 is executed when it is determined that the photographing angle is in the first angle threshold section. Step S46 is executed when it is determined that the photographing angle is in the second angle threshold section.
Step S45: and acquiring a current scene and providing a voice interaction interface corresponding to the scene for a user.
Step S46: and acquiring a current scene and providing a touch interactive interface corresponding to the scene for a user.
Step S47: and receiving an operation instruction input by a user at the voice interaction interface or the touch interaction interface so as to execute corresponding operation.
Providing the voice interaction interface to the user may specifically be: the processor 111 controls the speaker of the smart panel to play an inquiry voice corresponding to the current scene and sets the panel's sound pickup to a state in which it can collect surrounding sound.
For example, the smart panel 11 asks by voice, "Do you need to turn on the air purifier?"
The panel can then receive the user's voice command through the pickup and perform the operation corresponding to that command.
For example, if the user answers "yes" by voice, the smart panel 11 sends an on control command to the air purifier through its communicator 118 to turn the purifier on.
Providing the touch interaction interface to the user may specifically be: the processor 111 controls the display 115 of the smart panel 11 to display at least one touch button corresponding to the current scene and sets the region of each button to a state in which it can receive a touch instruction.
For example, continuing the example above, the display 115 of the smart panel 11 pops up a switch button for the air purifier.
The panel can then receive the user's touch instruction through the button and perform the operation corresponding to the button.
For example, if the user taps the button, the smart panel 11 sends an on control command to the air purifier through its communicator 118 to turn the purifier on.
In this embodiment, as the user gradually turns from side-on to face-on toward the panel's camera, a voice interaction interface is provided first and a touch interaction interface afterwards. For example, when the user walks past the smart panel 11 side-on, the panel's speaker asks "Do you need to close the window?"; if the user answers "yes", the operation completes without the user ever turning toward the panel. If the user instead turns to face the display of the smart panel 11 and moves closer, a touch interaction interface is provided. Operation is simple either way, which greatly improves the user's experience.
Referring to fig. 5, fig. 5 is a flowchart illustrating a multi-level interaction method based on angle detection according to a third embodiment of the present application.
In this embodiment, the interactive content of the first interactive interface and the second interactive interface differs. Taking touch interaction as an example, the first interactive interface is a first touch button menu, the second interactive interface is a second touch button menu, and the second touch button menu is a secondary menu corresponding to one touch button in the first touch button menu.
Specifically, in this embodiment, the multi-level interaction method for the smart panel may include the following steps:
step S51: the camera of the intelligent panel acquires an environment image around the intelligent panel.
Wherein, the processor 111 of the smart panel 11 controls the camera 113 to shoot the environment image around the smart panel 11.
Step S52: the intelligent panel judges whether a human face exists in the environment image.
The processor 111 of the smart panel 11 obtains the environment image captured by the camera 113 from the camera 113. The processor 111 determines whether a face is present in the ambient image.
In step S52, when the smart panel determines that a human face is present in the environment image, step S53 is performed.
In step S52, the smart panel returns to step S51 when it determines that there is no human face in the environment image.
Step S53: and recognizing the shooting angle of the face in the environment image.
The shooting angle refers to an included angle between a plane where the face is located and a photosensitive surface of a photosensitive element of the camera, and the plane where the face is located can be defined as a plane where a connecting line between two eyebrows and a nose bridge line are located. For example, the shooting angle of a human face is 0 degrees when the front face is opposed to the camera, and the shooting angle is 90 degrees when the side faces are opposed (the ears are opposed to the camera). In this embodiment, the light-sensing surface of the camera may be substantially parallel to the plane of the smart panel 11, for example, substantially parallel to the touch surface of the display.
Step S54: the intelligent panel judges the size relation between the shooting angle and a first angle threshold interval and a second angle threshold interval, wherein the angle value in the first angle threshold interval is larger than the angle value in the second angle threshold interval.
In step S54, step S55 is executed when it is determined that the photographing angle is in the first angle threshold section. Step S56 is executed when it is determined that the photographing angle is in the second angle threshold section.
Step S55: the method includes the steps of obtaining a current scene and providing a first touch button menu corresponding to the scene to a user.
Step S56: the method includes the steps of obtaining a current scene and providing a first touch button menu corresponding to the scene to a user.
Step S57: and receiving an operation instruction input by a user in the first touch button menu or the second touch button menu to execute corresponding operation.
Referring to fig. 6 and 7, fig. 6 is a schematic diagram of a first touch button menu according to an embodiment of the present disclosure. Fig. 7 is a schematic diagram of a second touch button menu according to an embodiment of the present disclosure.
In one application scenario, the interactive content of the first interactive interface asks whether the user wants to execute, with one key, all the operation items corresponding to the scene, while the interactive content of the second interactive interface asks the user separately about each operation item corresponding to the scene.
Specifically, taking touch interaction as an example, if the acquired current scene is an away-from-home scene, providing the first interactive interface to the user may specifically be: providing a touch button 61 that enters the away-from-home mode with one key. If the smart panel receives the user's operation instruction at the first interactive interface, for example the user taps the one-key away-mode button 61, the corresponding operations executed by the smart panel may include all operation items of the away-from-home mode: the panel sends a control signal to the smart lamp to turn off the light, a control signal to the smart window controller to close the window, and a control signal to the smart sweeping robot to start it.
Providing the second interactive interface to the user may specifically be: providing a touch button 71 for closing the window, a touch button 72 for turning off the light, and a touch button 73 for starting the sweeping robot. If an operation instruction is received at the second interactive interface, the corresponding operation is executed according to the user's touch. For example, if the user taps the close-window button 71 and the light-off button 72 but not the sweeping-robot button 73, the smart panel sends a control signal to the smart lamp to turn off the light and a control signal to the smart window controller to close the window; no operation is executed for the button the user did not touch.
In this embodiment, as the detected shooting angle of the face gradually decreases, that is, as the user turns from side-on to face-on toward the smart panel, the operation menu is expanded step by step: the more attention the user pays to the panel, the more specific the displayed menu. Different interactive interfaces are thus provided according to how busy the user is, reducing disturbance while keeping operation simple and convenient. For example, in one scene a user rushing to work hurries past the smart panel at the entrance and has only a moment to glance at it, so the panel provides the one-key away-mode button 61, and a single touch completes all operations. In another scene, the user passes the panel slowly, stops in front of it, and turns to face it; the panel then provides the more detailed buttons, namely the close-window button 71, the light-off button 72, and the sweeping-robot button 73, since the user evidently has time for finer-grained operations and can control the home devices precisely.
Referring to fig. 8, fig. 8 is a flowchart illustrating a multi-level interaction method based on angle detection according to a fourth embodiment of the present application.
In this embodiment, taking voice interaction as an example, the first interactive interface is a first voice menu, the second interactive interface is a second voice menu, and the second voice menu is a secondary voice menu corresponding to one voice option in the first voice menu.
Step S81: the camera of the intelligent panel acquires an environment image around the intelligent panel.
Wherein, the processor 111 of the smart panel 11 controls the camera 113 to shoot the environment image around the smart panel 11.
Step S82: the intelligent panel judges whether a human face exists in the environment image.
The processor 111 of the smart panel 11 obtains the environment image captured by the camera 113 from the camera 113. The processor 111 determines whether a face is present in the ambient image.
In step S82, when the smart panel determines that a human face is present in the environment image, step S83 is performed.
In step S82, the smart panel returns to step S81 when it determines that there is no human face in the environment image.
Step S83: and recognizing the shooting angle of the face in the environment image.
The shooting angle refers to an included angle between a plane where the face is located and a photosensitive surface of a photosensitive element of the camera, and the plane where the face is located can be defined as a plane where a connecting line between two eyebrows and a nose bridge line are located. For example, the shooting angle of a human face is 0 degrees when the front face is opposed to the camera, and the shooting angle is 90 degrees when the side faces are opposed (the ears are opposed to the camera). In this embodiment, the light-sensing surface of the camera may be substantially parallel to the plane of the smart panel 11, for example, substantially parallel to the touch surface of the display.
Step S84: the intelligent panel judges the size relation between the shooting angle and a first angle threshold interval and a second angle threshold interval, wherein the angle value in the first angle threshold interval is larger than the angle value in the second angle threshold interval.
In step S84, step S85 is executed when it is determined that the photographing angle is in the first angle threshold section. Step S86 is executed when it is determined that the photographing angle is in the second angle threshold section.
Step S85: the method comprises the steps of acquiring a current scene and providing a first voice menu corresponding to the scene for a user.
Step S86: the method comprises the steps of acquiring a current scene and providing a first voice menu corresponding to the scene for a user.
Step S87: and receiving an operation instruction input by a user in the first voice menu or the second voice menu to execute corresponding operation.
How to acquire the current scene is explained below.
Referring to fig. 9, fig. 9 is a schematic flowchart of an embodiment of the present application for acquiring a current scene.
In this embodiment, the obtaining of the current scene specifically may include the following steps:
step S91: the processor obtains a current time.
The current time may be obtained from the mobile Internet, for example from the smart home cloud server 14, or from a local timer, for example a timing count based on a local crystal oscillator.
Step S92: The processor looks up the corresponding scene, according to the current time, in a pre-stored correspondence table of time and scene.
Optionally, the pre-stored correspondence table of time and scene may be constructed from the user's habits as collected over time. For example, each time the user operates the smart panel 11, the panel records operation data such as the content of the operation (for example, which smart appliances were controlled) and the corresponding time point, and sends them to the smart home cloud server 14. From the collected data, the server 14 determines, for each period of the day, the operation content most frequently executed, defines each period as a scene, and stores the scene in association with that operation content, where each operation content corresponds to at least one interactive interface. This yields a correspondence table of time, scene, operation content, and interactive interface. The smart panel 11 can then look up, for the current time, the corresponding scene and the interactive interface for that scene in the table.
Table 1: example of a correspondence table of time, scene, operation content, and interactive interface (the table figure is not reproduced in this text).
The interactive interface, which may also be called a human-computer interface, is the input/output interface through which a person and a computer system establish contact and exchange information. In this embodiment, for the smart panel 11, the interactive interface may be a touch button, a voice interface (including voice input and voice output), a reminder icon, and the like.
The step of providing an interactive interface corresponding to a scene to a user includes: the smart panel 11 provides an interactive interface corresponding to a scene to a user to allow the user to input an instruction through the interactive interface.
In this embodiment, the step of providing an interactive interface corresponding to a scene to a user includes: the processor 111 controls the display of the smart panel 11 to display at least one touch button corresponding to a scene according to the scene.
Optionally, after the step of providing the user with the interactive interface corresponding to the scene, the interactive method further includes: the intelligent panel 11 receives the instruction of the user through the interactive interface and executes corresponding operation according to the instruction.
Specifically, the interactive interface in Table 1 is a touch button, but the embodiments of the present application do not limit the type of interactive interface; it may also be a voice interface or another human-computer interface. Taking a touch button as an example, the smart panel 11 includes a display 115 with a touch function, electrically connected to the processor 111. The display 115 shows the touch buttons: on the one hand it presents the information associated with each button to the user, for example a button for turning the air conditioner on or off; on the other hand the processor 111 keeps the displayed buttons in a state in which they can receive instructions, so that when the user touches the air-conditioner button, the smart panel 11 receives the user's instruction and sends the corresponding on or off control signal to the smart air conditioner, controlling the air conditioner to be turned on or off.
In the embodiments above, the interactive interface serves on the one hand to output information to the user (the content displayed by the touch buttons) and on the other hand to receive the user's operations through those buttons. It should be understood that the interactive interface may also be used purely to present content, such as showing a reminder icon on the display 115 or playing a reminder voice. For example, in an away-from-home scene, when the user passes the smart panel 11, the panel may display or announce a reminder to take an umbrella; in this case no instruction is received from the user, and the reminder is simply output one-way.
Referring further to fig. 10, fig. 10 is a schematic flow chart of another embodiment of the present application for acquiring a current scene.
In this embodiment, the obtaining of the current scene specifically may include the following steps:
step S101: the processor obtains at least one of the data of the indoor intelligent household electrical appliance device and the data of the indoor sensing device through the communicator of the intelligent panel.
Step S102: the processor analyzes the current scene according to at least one of the data of the indoor intelligent household electrical appliance and the data of the indoor sensing equipment.
For example, suppose the indoor sensing devices 13 include a smoke sensor and the indoor smart appliances 12 include an air purifier. If the smoke sensor detects a smoke level above a set threshold while the air purifier is off, the analyzed current scene is "the air purifier needs to be turned on". When the user passes the smart panel 11, the panel displays a switch button for the air purifier or asks by voice whether to turn it on; if the user taps the button or gives a voice instruction to turn the purifier on, the smart panel 11 sends a control signal to the appliance to turn it on.
Referring to fig. 11, fig. 11 is a schematic diagram of a hardware structure of an intelligent panel according to another embodiment of the present application.
In the present embodiment, the smart panel 120 includes a processor 121 and a memory 122 electrically connected to the processor 121, the memory 122 is used for storing a computer program, and the processor 121 is used for calling the computer program to execute the method described in any one of the above embodiments.
The embodiment of the present application also provides a storage medium, which stores a computer program, and the computer program can implement the method of any one of the above embodiments when executed by a processor.
The computer program may be stored in the storage medium in the form of a software product, and includes several instructions for causing a device or a processor to execute all or part of the steps of the method according to the embodiments of the present application.
A storage medium is a medium used in a computer to store data. The aforementioned storage medium may be: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of modules or units is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
In the embodiments of the present application, the camera of the intelligent panel captures an image of the environment around the panel; the panel determines whether a human face is present in the image; when a face is present, the panel recognizes the shooting angle of the face in the image; the panel compares the shooting angle against a first angle threshold interval and a second angle threshold interval, where the angle values in the first interval are larger than those in the second; when the shooting angle falls in the first interval, the panel acquires the current scene and provides the user with a first interactive interface corresponding to the scene; and when the shooting angle falls in the second interval, the panel acquires the current scene and provides the user with a second interactive interface corresponding to the scene. Disturbance to the user is thereby reduced, the complexity of user operation is reduced, and the human-computer interaction experience is improved.
The above embodiments are merely examples and do not limit the scope of the present application; all equivalent structure or process transformations made using the contents of this specification and drawings, whether applied directly or indirectly in other related technical fields, likewise fall within the scope of patent protection of the present application.

Claims (14)

1. A multi-level interaction method based on angle detection is characterized by comprising the following steps:
a camera of an intelligent panel acquires an environment image around the intelligent panel;
the intelligent panel judges whether a human face exists in the environment image or not;
when the intelligent panel judges that a human face exists in the environment image, recognizing the shooting angle of the human face in the environment image;
the intelligent panel judges the size relation between the shooting angle and a first angle threshold interval and a second angle threshold interval, wherein the angle value in the first angle threshold interval is larger than the angle value in the second angle threshold interval;
when the shooting angle is judged to be in a first angle threshold interval, acquiring a current scene and providing a first interactive interface corresponding to the scene for the user;
and when the shooting angle is judged to be in a second angle threshold interval, acquiring a current scene and providing a second interactive interface corresponding to the scene for the user.
2. The multi-level interaction method as claimed in claim 1, wherein the step of identifying the shooting angle of the face in the environment image comprises:
detecting the area of the region where the face is located;
calculating a proportional value of the area of the region where the face is located and a prestored reference area;
and calculating the shooting angle according to the proportion value.
3. The multi-level interaction method of claim 2, further comprising:
acquiring facial features of the human face;
identifying the identity information of the user corresponding to the face according to the facial features;
and searching the corresponding pre-stored reference area in a pre-stored corresponding relation table of the identity information and the reference area according to the identity information.
4. The multi-level interaction method of claim 1, wherein the first interaction interface and the second interaction interface have different corresponding interaction modes.
5. The multi-level interaction method as claimed in claim 4, wherein the first interaction interface is a voice interaction interface, and the second interaction interface is a touch interaction interface.
6. The multi-level interaction method of claim 1, wherein the interaction content of the first interaction interface is different from the interaction content of the second interaction interface.
7. The multi-level interaction method as claimed in claim 6, wherein the first interaction interface is a first touch button menu, the second interaction interface is a second touch button menu, and the second touch button menu is a secondary touch button menu corresponding to one touch button in the first touch button menu.
8. The multi-level interaction method of claim 6, wherein the first interactive interface is a first voice menu, the second interactive interface is a second voice menu, and the second voice menu is a secondary voice menu corresponding to one voice option in the first voice menu.
9. The multi-level interaction method according to claim 6, wherein the interaction content of the first interaction interface is to inquire whether the user performs all operation items corresponding to the scene by one key, and the interaction content of the second interaction interface is to respectively inquire whether the user performs each operation item corresponding to the scene.
10. The multi-level interaction method of claim 1, further comprising:
and receiving an operation instruction input by the user at the first interactive interface or the second interactive interface so as to execute corresponding operation.
11. The multi-level interaction method of claim 1, wherein the step of obtaining the current scene comprises:
the processor acquires the current time;
and the processor searches a corresponding scene in a pre-stored corresponding relation table of time and scene according to the current time.
12. The multi-level interaction method of claim 1, wherein the step of obtaining the current scene comprises:
the processor acquires at least one of data of indoor intelligent household appliances and data of indoor sensing equipment through the communicator of the intelligent panel;
and the processor analyzes the current scene according to at least one of the data of the indoor intelligent household appliance and the data of the indoor sensing equipment.
13. A smart panel comprising a processor and a memory electrically connected to the processor, the memory for storing a computer program, the processor for invoking the computer program to perform the method of any one of claims 1-12.
14. A storage medium, characterized in that the storage medium stores a computer program executable by a processor to implement the method of any one of claims 1-12.
CN201911195349.4A 2019-11-28 2019-11-28 Intelligent panel, multi-level interaction method based on angle detection and storage medium Pending CN110941196A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911195349.4A CN110941196A (en) 2019-11-28 2019-11-28 Intelligent panel, multi-level interaction method based on angle detection and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911195349.4A CN110941196A (en) 2019-11-28 2019-11-28 Intelligent panel, multi-level interaction method based on angle detection and storage medium

Publications (1)

Publication Number Publication Date
CN110941196A 2020-03-31

Family

ID=69908288

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911195349.4A Pending CN110941196A (en) 2019-11-28 2019-11-28 Intelligent panel, multi-level interaction method based on angle detection and storage medium

Country Status (1)

Country Link
CN (1) CN110941196A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170052514A1 (en) * 2015-08-17 2017-02-23 Ton Duc Thang University Method and computer software program for a smart home system
CN108885498A (en) * 2016-03-24 2018-11-23 三星电子株式会社 Electronic device and the in an electronic method of offer information
CN109643167A (en) * 2016-09-30 2019-04-16 英特尔公司 Interactive mode selection based on the detecting distance between user and machine interface
CN107678649A (en) * 2017-09-27 2018-02-09 深圳市欧瑞博电子有限公司 The method for information display and device of intelligent panel
CN109992237A (en) * 2018-01-03 2019-07-09 腾讯科技(深圳)有限公司 Intelligent sound apparatus control method, device, computer equipment and storage medium
CN108536027A (en) * 2018-03-30 2018-09-14 百度在线网络技术(北京)有限公司 Intelligent home furnishing control method, device and server

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112526890A (en) * 2020-11-30 2021-03-19 星络智能科技有限公司 Intelligent household control method and device and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN107678649B (en) Information display method and device of intelligent panel
RU2628558C2 (en) Method and smart terminal handling device
CN105939236A (en) Method and device for controlling intelligent home device
CN110196557B (en) Equipment control method, device, mobile terminal and storage medium
CN106603350B (en) Information display method and device
CN110543159B (en) Intelligent household control method, control equipment and storage medium
CN111812993A (en) Scene linkage control method, device and storage medium
CN110950204A (en) Call calling method based on intelligent panel, intelligent panel and storage medium
CN109188926A (en) A kind of method and apparatus controlling smart home
CN111123723A (en) Grouping interaction method, electronic device and storage medium
CN110647050B (en) Storage medium, intelligent panel and multi-level interaction method thereof
CN111240221A (en) Storage medium, intelligent panel and equipment control method based on intelligent panel
CN111025930A (en) Intelligent home control method, intelligent home control equipment and storage medium
CN110888335A (en) Intelligent home controller, interaction method thereof and storage medium
CN111147935A (en) Control method of television, intelligent household control equipment and storage medium
CN107368044A (en) A kind of real-time control method of intelligent electric appliance, system
CN110941196A (en) Intelligent panel, multi-level interaction method based on angle detection and storage medium
CN110941198A (en) Storage medium, smart panel and power-saving booting method thereof
CN111126163A (en) Intelligent panel, interaction method based on face angle detection and storage medium
CN114826805A (en) Computer readable storage medium, mobile terminal and intelligent home control method
CN110995551A (en) Storage medium, intelligent panel and interaction method thereof
WO2023051643A1 (en) Device control method, related apparatus, and communication system
CN110989378A (en) Intelligent home controller, interaction method thereof and storage medium
CN111009305A (en) Storage medium, intelligent panel and food material recommendation method thereof
CN111600935A (en) Storage medium, interactive device and reminding method based on interactive device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
Application publication date: 20200331