CN111476140A - Information playing method and system, electronic equipment, household appliance and storage medium - Google Patents

Information playing method and system, electronic equipment, household appliance and storage medium

Info

Publication number
CN111476140A
CN111476140A
Authority
CN
China
Prior art keywords
playing
expression
information
image
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010250873.3A
Other languages
Chinese (zh)
Inventor
宋德超
陈翀
李孟宸
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gree Electric Appliances Inc of Zhuhai
Zhuhai Lianyun Technology Co Ltd
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Zhuhai Lianyun Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gree Electric Appliances Inc of Zhuhai, Zhuhai Lianyun Technology Co Ltd filed Critical Gree Electric Appliances Inc of Zhuhai
Priority to CN202010250873.3A priority Critical patent/CN111476140A/en
Publication of CN111476140A publication Critical patent/CN111476140A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/63Querying
    • G06F16/635Filtering based on additional data, e.g. user or group profiles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides an information playing method, an information playing system, electronic equipment, a household appliance and a storage medium. The information playing method comprises the following steps: acquiring an image of an area to be detected; performing face detection on the image of the area to be detected; performing expression recognition when a face is detected in the image of the area to be detected; and playing the playing information corresponding to the identified expression according to the identified expression and a preset corresponding relationship between the expression and the playing information. By performing face detection on the acquired image of the area to be detected and performing expression recognition when a face is detected, the experience requirements of the user can be sensed; by playing the playing information corresponding to the identified expression according to the identified expression and the preset corresponding relationship between the expression and the playing information, the user is given a good experience that matches the current emotion.

Description

Information playing method and system, electronic equipment, household appliance and storage medium
Technical Field
The invention relates to the technical field of smart home, in particular to an information playing method, an information playing system, electronic equipment, household appliances and a storage medium.
Background
With the development of the internet of things and smart homes, household appliances are no longer limited to traditional applications. To provide a better user experience, internet-of-things household appliances try to reduce direct operations by the user, learn the user's requirements through indirect means, and increasingly take on the role of a home assistant. For example, an air conditioner whose indoor unit integrates a screen and audio can display pictures and play music for the user, but at present it can only play pictures and music preset by the user and cannot intelligently sense the user's requirements.
Disclosure of Invention
The invention provides an information playing method, an information playing system, electronic equipment, a household appliance and a storage medium, which can sense the requirements of a user, play corresponding information according to those requirements, and thereby bring a better user experience.
In a first aspect, the present invention provides an information playing method, including:
acquiring an image of a region to be detected;
carrying out face detection on the image of the area to be detected;
when a face is detected in the image of the area to be detected, performing expression recognition;
and playing the playing information corresponding to the identified expression according to the identified expression and the preset corresponding relation between the expression and the playing information.
According to the embodiment of the present invention, preferably, the information playing method further includes:
and when the playing of the playing information corresponding to the recognized expression is finished, executing the step of acquiring the image of the area to be detected.
According to the embodiment of the present invention, preferably, the information playing method further includes:
and when the playing time of the playing information corresponding to the identified expression reaches the preset time, stopping playing, and executing the step of acquiring the image of the area to be detected.
According to an embodiment of the present invention, preferably, in the information playing method, the playing of the playing information corresponding to the identified expression according to the identified expression and the preset corresponding relationship between the expression and the playing information includes:
determining playing information corresponding to the identified expression according to the identified expression and a preset corresponding relation between the expression and the playing information;
searching playing information corresponding to the identified expression from a preset internet searching platform;
and playing the searched playing information.
According to an embodiment of the present invention, preferably, in the information playing method, the playing of the playing information corresponding to the identified expression according to the identified expression and the preset corresponding relationship between the expression and the playing information includes:
determining playing information corresponding to the identified expression according to the identified expression and a preset corresponding relation between the expression and the playing information;
searching playing information corresponding to the identified expression from local pre-stored information;
and playing the searched playing information.
According to an embodiment of the present invention, preferably, in the information playing method, the playing of the playing information corresponding to the identified expression according to the identified expression and the preset corresponding relationship between the expression and the playing information includes:
determining playing information corresponding to the identified expression according to the identified expression and a preset corresponding relation between the expression and the playing information;
searching playing information corresponding to the identified expression from local pre-stored information;
if the playing information corresponding to the identified expression is not searched from the local pre-stored information, searching the playing information corresponding to the identified expression from a preset internet searching platform;
and playing the searched playing information.
According to an embodiment of the present invention, preferably, in the information playing method, the performing face detection on the image of the region to be detected includes:
and carrying out face detection on the image of the region to be detected by using an Adaboost cascade classifier based on Haar characteristics.
According to an embodiment of the present invention, preferably, in the information playing method, the playing information includes at least one of audio and image.
In a second aspect, the present invention provides an information playing system, including:
the image acquisition module is used for acquiring an image of a region to be detected;
the face detection module is used for carrying out face detection on the image of the area to be detected;
the expression recognition module is used for recognizing the expression when the face is recognized in the image of the area to be detected;
and the information playing module is used for playing the playing information corresponding to the identified expression according to the identified expression and the preset corresponding relation between the expression and the playing information.
In a third aspect, the present invention provides an electronic device, comprising a memory and a processor, wherein the memory stores a computer program, and the computer program implements the information playing method according to the first aspect when executed by the processor.
In a fourth aspect, the invention provides a household appliance comprising the electronic device of the third aspect.
According to an embodiment of the present invention, preferably, the household appliance is an air conditioner.
In a fifth aspect, the present invention provides a storage medium having stored thereon a computer program which, when executed by one or more processors, implements the information playback method according to the first aspect.
Compared with the prior art, one or more embodiments in the above scheme can have the following advantages or beneficial effects:
the obtained image of the area to be detected is subjected to face detection, and expression recognition is carried out when a face is detected in the image of the area to be detected, so that the experience requirements of a user can be sensed; playing the playing information corresponding to the identified expression according to the identified expression and the preset corresponding relation between the expression and the playing information, and bringing good user experience under the current emotion to the user.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting its scope; those skilled in the art can obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a flowchart of an information playing method according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a convolutional neural network according to an embodiment of the present invention;
fig. 3 is a flowchart of an information playing method according to a second embodiment of the present invention;
fig. 4 is a flowchart of another information playing method according to a second embodiment of the present invention;
fig. 5 is a detailed flowchart of step S5 according to the third embodiment of the present invention;
FIG. 6 is another detailed flowchart of step S5 according to the third embodiment of the present invention;
FIG. 7 is another detailed flowchart of step S5 according to the third embodiment of the present invention;
fig. 8 is a block diagram of an information playing system according to a fourth embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
Example one
Referring to fig. 1, the present embodiment provides an information playing method, which includes the following steps:
and step S1, acquiring the image of the area to be detected.
Specifically, video of the area to be detected can be acquired in real time through an image acquisition device arranged on the household appliance. The image acquisition device is installed at a position where a face image can be captured and may be, but is not limited to, a camera. Video of the area to be detected is captured by the camera, and one frame is sampled from the video every second as the image of the area to be detected, which is then used for face detection.
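As a minimal sketch of this acquisition step (assuming an OpenCV-accessible camera at device index 0; the patent does not name a specific library), one frame per second can be sampled as follows:

```python
import time

import cv2  # OpenCV is an assumption; any camera/image library could be used


def frames_to_detect(camera_index=0, interval_s=1.0):
    """Yield roughly one frame per second as the image of the area to be detected."""
    cap = cv2.VideoCapture(camera_index)
    if not cap.isOpened():
        raise RuntimeError("image acquisition device not available")
    last = 0.0
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            now = time.time()
            if now - last >= interval_s:  # sample one frame per second for face detection
                last = now
                yield frame
    finally:
        cap.release()
```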
And step S2, carrying out face detection on the image of the area to be detected.
Preferably, an Adaboost cascade classifier based on Haar features may be adopted to perform face detection on the image of the area to be detected: first, Haar features are extracted from the image; then the Adaboost cascade classifier evaluates image blocks under sliding windows of different sizes. If a face is detected, the position of the window containing the face in the image is obtained, the face image block is cropped, and acquisition of further images from the video of the area to be detected is paused.
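As a concrete illustration of this step (a sketch using OpenCV's pre-trained frontal-face cascade, which is an AdaBoost cascade over Haar features; the patent does not prescribe a particular implementation), detection and cropping of the face image block could look like this:

```python
import cv2

# Pre-trained AdaBoost cascade over Haar features shipped with OpenCV (an assumed choice).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def detect_faces(image):
    """Return face windows as (x, y, w, h) tuples and the cropped face image blocks."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    boxes = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    crops = [image[y:y + h, x:x + w] for (x, y, w, h) in boxes]
    return boxes, crops
```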
Step S3, judging whether a face is detected in the image of the area to be detected; if a face is detected in the image of the area to be detected, step S4 is executed; otherwise, step S1 is executed.
It can be understood that when a face is detected, expression recognition can be performed on the face, and the household appliance is then controlled to play the information corresponding to the recognized expression. If no face is detected, images of the area to be detected continue to be acquired for face detection.
And step S4, performing expression recognition.
Preferably, when a face is detected in the image of the area to be detected, a convolutional neural network may be used for expression recognition. As shown in fig. 2, the convolutional neural network comprises an input layer, a convolutional layer, a pooling layer and fully connected layers (FC1, FC2, FC3), and is trained with a large amount of face data labelled with expressions.
The face image block is cropped according to the face position obtained by face detection and input into the expression recognition network. Because the network has been trained on labelled face data, it can infer the expression on the input face, such as happiness, sadness or anger, and this expression reflects the current emotion of the user.
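A minimal sketch of such a network in Keras is shown below; the patent specifies only the layer types of Fig. 2 (input, convolution, pooling, FC1-FC3), so the input size, channel counts and the three expression classes are illustrative assumptions:

```python
from tensorflow import keras
from tensorflow.keras import layers

NUM_EXPRESSIONS = 3  # e.g. happy, sad, angry (assumed label set)

# Structure follows Fig. 2: input layer, convolutional layer, pooling layer, FC1-FC3.
model = keras.Sequential([
    layers.Input(shape=(48, 48, 1)),                      # grayscale face image block (assumed size)
    layers.Conv2D(32, 3, activation="relu"),              # convolutional layer
    layers.MaxPooling2D(2),                               # pooling layer
    layers.Flatten(),
    layers.Dense(128, activation="relu"),                 # FC1
    layers.Dense(64, activation="relu"),                  # FC2
    layers.Dense(NUM_EXPRESSIONS, activation="softmax"),  # FC3 -> expression classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# The model would then be trained on a large amount of face data labelled with expressions.
```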
And step S5, playing the playing information corresponding to the identified expression according to the identified expression and the preset corresponding relation between the expression and the playing information.
The playing information may include at least one of audio and image. It is to be understood that the audio may be music, either a type of music or a specific designated piece, and the image may be a photo, either a type of photo or a specific designated photo; this embodiment is not limited in this respect.
Taking the playing information as music as an example, the preset corresponding relationship between the expression and the playing information may include, but is not limited to, the following:
the playing information corresponding to the sad expression is healing music;
the playing information corresponding to the angry expression is laugh music;
the playing information corresponding to the happy expression is relaxing music.
Taking the playing information as an image as an example, the preset corresponding relationship between the expression and the playing information may include, but is not limited to, the following:
the playing information corresponding to the sad expression is a sentimental image;
playing information corresponding to the angry expression is a laugh image;
the playing information corresponding to the happy expression is a landscape image.
It is understood that the image in this embodiment may be a static image or a dynamic image, and is not limited to the specific example.
In some preferred embodiments, the playing information may include two kinds of information at the same time; for example, if the playing information includes both audio and an image, the correspondence may be as follows (a lookup-table sketch of this correspondence is given after the list):
the playing information corresponding to the sad expression is a sentimental image and healing music;
the playing information corresponding to the angry expression is a laugh image and laugh music;
the playing information corresponding to the happy expression is a landscape image and relaxing music.
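Such a preset correspondence can be written as a simple lookup table; the sketch below merely restates the examples above, and the entries would in practice be configurable:

```python
# Preset correspondence between expression and playing information (illustrative entries).
EXPRESSION_TO_PLAY_INFO = {
    "sad":   {"audio": "healing music",  "image": "sentimental images"},
    "angry": {"audio": "laugh music",    "image": "laugh images"},
    "happy": {"audio": "relaxing music", "image": "landscape images"},
}


def play_info_for(expression):
    """Look up the playing information preset for the identified expression."""
    return EXPRESSION_TO_PLAY_INFO.get(expression)
```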
It can be understood that images and music can be played simultaneously through the multimedia equipment of the household appliance, such as a display device and an audio playing device. After the user's emotion is sensed through expression recognition, the user thus receives a visual and auditory experience matching the current emotion, and the user's requirements are met.
In other preferred embodiments, the playing information may also be a video, and the synchronous playing of the picture and the sound of the video is realized through a multimedia device carried by the household appliance or externally connected to the household appliance.
According to this embodiment, face detection is performed on the acquired image of the area to be detected, and expression recognition is performed when a face is detected in that image, so that the experience requirements of the user can be sensed; the playing information corresponding to the identified expression is then played according to the identified expression and the preset corresponding relationship between the expression and the playing information, giving the user a good experience that matches the current emotion.
Example two
Because the user's emotion may change, and with it the corresponding playing requirement, a condition is set for restarting the face detection process while the playing information corresponding to the identified expression is being played. The latest expression of the user is then identified, so that the playing operation can be updated to follow the user's requirement, which helps to adjust the user's emotion and improves the quality of the user's experience.
This embodiment provides an information playing method based on the first embodiment. In some preferred embodiments, referring to fig. 3, the method may further include:
and S6, judging whether the playing information corresponding to the recognized expression is played completely, if so, continuing to execute the step S1 of acquiring the image of the area to be detected, otherwise, executing the step S7.
And step S7, continuing playing, and executing step S6 until the playing is finished.
It can be understood that, taking the playing information as music as an example, "played completely" means that the whole piece of music has finished; restarting the face detection process only after playback has finished provides a better audio-visual effect for the user.
In practical application, the face detection process may be restarted shortly before playback finishes, so that the playing content transitions smoothly and the user enjoys a better playing experience. Specifically, when the remaining playing time of the playing information corresponding to the currently identified expression reaches a preset value, the step of acquiring the image of the area to be detected is executed and the face detection process is started in advance; the playing information corresponding to the currently identified expression continues to play until it finishes, and the playing information corresponding to the newly identified expression is then played directly, so that the playing content transitions smoothly. The preset value may be determined according to the time required to execute steps S1 to S5. For example, if steps S1 to S5 take 5 seconds, the currently identified expression is sad, and the corresponding playing information is a piece of healing music lasting 5 minutes, then when the music has played for 4 minutes and 55 seconds, the step of acquiring the image of the area to be detected is executed and the face detection process is started in advance; a new expression is identified and its corresponding playing information is determined, and when the music has played for 5 minutes the playing information corresponding to the new expression is played directly.
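The early-restart timing can be sketched as follows; `run_recognition_pipeline` and `play` are hypothetical helpers standing in for steps S1 to S5 and for the playback device, and the pipeline is assumed to take roughly `pipeline_time_s` seconds, as in the example above:

```python
import threading


def schedule_seamless_transition(track_duration_s, pipeline_time_s,
                                 run_recognition_pipeline, play):
    """Restart steps S1-S5 shortly before the current item ends, then switch playback.

    E.g. a 300 s piece of healing music with a 5 s pipeline restarts detection at 295 s,
    so the playing information for the newly identified expression starts at about 300 s.
    """
    restart_at = max(track_duration_s - pipeline_time_s, 0.0)

    def _restart_and_switch():
        next_info = run_recognition_pipeline()   # capture, detect, recognize, look up
        play(next_info)                          # the current item has just finished playing

    threading.Timer(restart_at, _restart_and_switch).start()
```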
In some cases, the playing information may be very long, and during long playback the user's emotion may already have changed, creating a new playing requirement. To keep playback responsive and adjust the user's emotion effectively, the playing time can be limited by a preset duration, bringing a better experience to the user. Therefore, in other preferred embodiments, referring to fig. 4, the method may further include:
and step S8, judging whether the playing time of the playing information corresponding to the identified expression reaches the preset time, if so, executing the step S9, otherwise, continuing playing until the preset time is reached.
And S9, stopping playing, and continuing to execute the step S1 of acquiring the image of the area to be detected.
For example, the preset time may be 5 minutes, and when the playing information corresponding to the identified expression is played for 5 minutes, the playing is stopped, and the face detection process is restarted.
In practical application, the remaining playing time within the preset duration can be monitored; when the remaining time reaches a preset value, the step of acquiring the image of the area to be detected is executed and the face detection process is started in advance, otherwise playback continues until the remaining time reaches the preset value. The playing information corresponding to the currently identified expression continues to play until the preset duration is reached, after which the playing information corresponding to the newly identified expression is played directly, so that the playing content transitions smoothly and the user enjoys a better playing experience. The preset value may be determined according to the time required to execute steps S1 to S5. For example, if steps S1 to S5 take 5 seconds, the currently identified expression is sad, the preset duration is 4 minutes and the corresponding playing information is healing music, then when the music has played for 3 minutes and 55 seconds the step of acquiring the image of the area to be detected is executed, the face detection process is started in advance, a new expression is identified and its corresponding playing information is determined; when the music has played for 4 minutes, i.e. the preset duration is reached, the playing information corresponding to the new expression is played directly.
According to this embodiment, while the playing information corresponding to the identified expression is being played, a condition is set for restarting the face detection process and identifying the user's latest expression, so that the playing operation can be updated to follow the user's requirement, which helps to adjust the user's emotion and improves the quality of the user's experience.
EXAMPLE III
The present embodiment provides an information playing method, and a specific flow of step S5 in the first embodiment is shown in fig. 5, and may include the following sub-steps:
and step S51, determining playing information corresponding to the identified expression according to the identified expression and the preset corresponding relation between the expression and the playing information.
And step S52, searching playing information corresponding to the identified expression from a preset Internet search platform.
Step S53, the searched playback information is played back.
Specifically, the preset internet search platform may be set by the user or set by default, and may include, but is not limited to: music search websites, music apps, picture search engines, and so on. The preset internet search platform serves as the designated search channel: after the playing information corresponding to the identified expression is determined, the system automatically jumps to the preset internet search platform to search for that playing information, and the search result is played through the multimedia equipment carried by or externally connected to the household appliance, such as a display device and an audio playing device. For example, if the preset internet search platform is a certain music app, the identified expression is angry and the corresponding playing information is laugh music, the system automatically jumps to the music app, searches for laugh music and plays it. It can be understood that the search may return more than one piece of laugh music, for example displayed as a list; in this case the music may be played according to the default ordering or a preset ordering of the list, or a piece in the list may be played at random, where the default ordering may be by play count and the preset ordering may be by search popularity.
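The result-selection logic described above can be sketched as follows; the structure of the result list and its keys are assumptions, since the patent does not name a concrete search platform API:

```python
import random


def choose_result(results, order="play_count"):
    """Pick one item from a search-result list.

    `results` is assumed to be a list of dicts such as
    {"title": ..., "play_count": ..., "heat": ...}: sort by play count (default ordering),
    by search popularity ("heat", a preset ordering), or pick at random.
    """
    if not results:
        return None
    if order == "random":
        return random.choice(results)
    return max(results, key=lambda item: item.get(order, 0))
```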
As shown in fig. 6, in another preferred embodiment, the specific flow of step S5 in the first embodiment may further include the following sub-steps:
step S51, determining playing information corresponding to the identified expression according to the identified expression and the preset corresponding relation between the expression and the playing information.
Step S54, searching for playing information corresponding to the identified expression from the local pre-stored information.
Step S53, the searched playback information is played back.
Specifically, the local pre-stored information may be playing information pre-stored in a memory of the household appliance, and may include at least one of audio and image. It is understood that the audio may be a specific piece of music and the image may be a specific photo; this embodiment is not limited in this respect.
Taking the playing information as music as an example, the preset corresponding relationship between the identified expression and the playing information may include, but is not limited to, the following:
the playing information corresponding to the sad expression is healing music A;
the playing information corresponding to the angry expression is laugh music B;
the playing information corresponding to the happy expression is relaxing music C.
Taking the playing information as an image as an example, the preset corresponding relationship between the expression and the playing information may include, but is not limited to, the following:
the playing information corresponding to the sad expression is a sentimental image D;
the playing information corresponding to the angry expression is a laugh image E;
the playing information corresponding to the happy expression is a landscape image F.
It is understood that the image in this embodiment may be a static image or a dynamic image, and is not limited to the specific example.
After the playing information corresponding to the identified expression has been found in the local pre-stored information of the household appliance, it can be played through the multimedia equipment carried by or externally connected to the household appliance. For example, if the identified expression is angry and the corresponding playing information is laugh music B, laugh music B is automatically retrieved from the pre-stored information in the memory and played. On the one hand, locally stored playing information does not depend on the internet, so the playing effect is not affected by the network speed and smooth playback can be achieved. On the other hand, locally stored playing information can be stored by the user, so it better matches the user's preferences and can better adjust the user's emotion; in addition, searching the local storage improves search accuracy and avoids the uncertainty of internet search results.
As shown in fig. 7, in another preferred embodiment, the specific flow of step S5 in the first embodiment may further include the following sub-steps:
and step S51, determining playing information corresponding to the identified expression according to the identified expression and the preset corresponding relation between the expression and the playing information.
Step S54, searching for playing information corresponding to the identified expression from the local pre-stored information.
Step S55, judging whether playing information corresponding to the identified expression is searched from the local prestored information; if yes, go to step S53; otherwise, step S56 is executed.
And step S56, searching playing information corresponding to the identified expression from a preset Internet search platform.
Step S53, the searched playback information is played back.
Specifically, the user can give priority to local search by setting a search priority, that is: after the playing information corresponding to the identified expression is determined, the local pre-stored information is searched first. For example, if the identified expression is angry and the corresponding playing information is laugh music B, laugh music B is automatically searched for in the pre-stored information in the memory; if it is found, it is played directly, and if it is not found, the system automatically jumps to the preset internet search platform to search for it. On the one hand, locally stored playing information does not depend on the internet, so the playing effect is not affected by the network speed, and search efficiency is also improved; on the other hand, locally stored playing information can be stored by the user, so it better matches the user's preferences and can better adjust the user's emotion. In some special cases, playing information in local storage may have been lost or deleted by mistake, so that the playing information corresponding to the identified expression cannot be found in the local pre-stored information; in that case the playing information corresponding to the identified expression can be searched for on the preset internet search platform.
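A sketch of this local-first lookup with Internet fallback is shown below, reusing the lookup table sketched in the first embodiment; `local_store` and `internet_search` are hypothetical stand-ins for the appliance memory and the preset Internet search platform:

```python
def find_play_info(expression, local_store, internet_search):
    """Search local pre-stored information first, then fall back to the Internet platform."""
    preset = EXPRESSION_TO_PLAY_INFO.get(expression, {})
    name = preset.get("audio")            # e.g. "laugh music" for an angry expression
    if name is None:
        return None
    media = local_store.get(name)         # steps S54/S55: search local pre-stored information
    if media is not None:
        return media                      # found locally: play without touching the network
    return internet_search(name)          # step S56: search the preset Internet platform
```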
Example four
Referring to fig. 8, corresponding to the method embodiments above, the present embodiment provides an information playing system, including:
the image acquisition module 1 is used for acquiring an image of a region to be detected.
And the face detection module 2 is used for carrying out face detection on the image of the area to be detected.
The judging module 3 is used for judging whether a human face is detected in the image of the region to be detected;
and the expression recognition module 4 is used for recognizing expressions when a face is detected in the image of the area to be detected.
And the information playing module 5 is used for playing the playing information corresponding to the identified expression according to the identified expression and the preset corresponding relationship between the expression and the playing information.
It is understood that the image acquisition module 1 may be configured to perform step S1, the face detection module 2 step S2, the judging module 3 step S3, the expression recognition module 4 step S4, and the information playing module 5 step S5. For details of the specific steps, please refer to the first embodiment, which are not repeated here.
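A minimal sketch of how these modules could be wired together is given below; the callables passed in are hypothetical and would wrap the functions sketched in the first embodiment:

```python
class InformationPlayingSystem:
    """Thin orchestration of modules 1-5; a sketch, not the patented implementation."""

    def __init__(self, acquire_image, detect_faces, recognize_expression, play_info):
        self.acquire_image = acquire_image                  # image acquisition module 1 (step S1)
        self.detect_faces = detect_faces                    # face detection module 2 (step S2)
        self.recognize_expression = recognize_expression    # expression recognition module 4 (step S4)
        self.play_info = play_info                          # information playing module 5 (step S5)

    def run_once(self):
        image = self.acquire_image()
        faces = self.detect_faces(image)
        if not faces:                                       # judging module 3 (step S3): no face found
            return False                                    # caller acquires the next image
        self.play_info(self.recognize_expression(faces[0]))
        return True
```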
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general-purpose computing device. They may be centralized on a single computing device or distributed across a network of computing devices. Alternatively, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device; or they may be separately fabricated into integrated circuit modules, or multiple modules or steps may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
EXAMPLE five
The embodiment provides an electronic device, which comprises a memory and a processor, wherein the memory stores a computer program, and the computer program is executed by the processor to realize the information playing method provided by any one of the first to third embodiments.
The processor in this embodiment may be an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor, or another electronic component, and is configured to execute the method in the above embodiments.
EXAMPLE six
The present embodiment provides a home appliance including: the electronic device comprises a memory and a processor, wherein the memory stores a computer program, and the computer program realizes the information playing method provided by any one of the first embodiment to the third embodiment when being executed by the processor.
In this embodiment, the household appliance may be provided with an image acquisition device to acquire an image of an area to be detected, and the image acquisition device may be, but is not limited to, a camera. The household appliance in this embodiment is provided with a multimedia device, and the multimedia device may include, but is not limited to, a display device and an audio playing device, where the display device may be a display screen, and the audio playing device may be a sound box. A processor in the electronic equipment is connected with the image acquisition device and can acquire the image of the area to be detected acquired by the image acquisition device.
Preferably, the memory is connected to the internet, so that playing information can be searched for on the internet. The household appliance may be, but is not limited to, an air conditioner. An image acquisition device is installed on the air conditioner, and the air conditioner has built-in multimedia equipment or is externally connected to multimedia equipment. The image acquisition device installed on the air conditioner captures the image of the area to be detected; the processor acquires the image, performs face detection on it and judges whether a face is detected. When a face is detected in the image of the area to be detected, expression recognition is performed, and according to the identified expression and the preset corresponding relationship between the expression and the playing information, the playing information corresponding to the identified expression is played through the built-in or externally connected multimedia equipment.
According to this embodiment, the user's experience requirements can be sensed while the user is using the household appliance (such as an air conditioner), and the playing information corresponding to the identified expression is played according to the identified expression and the preset corresponding relationship between the expression and the playing information, which helps to adjust the user's emotion and brings a better experience of using the household appliance.
EXAMPLE seven
The present embodiment provides a storage medium, where a computer program is stored on the storage medium, and when the computer program is executed by one or more processors, the information playing method provided in any one of the first to third embodiments is implemented.
The computer-readable storage medium in this embodiment may be implemented by any type of volatile or nonvolatile Memory device or combination thereof, such as a Static Random Access Memory (SRAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), an Erasable Programmable Read-Only Memory (EPROM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), a magnetic Memory, a flash Memory, a magnetic disk, or an optical disk.
In summary, the information playing method and system, electronic equipment, household appliance and storage medium provided by the invention acquire an image of an area to be detected and perform face detection on it; when a face is detected in the image of the area to be detected, expression recognition is performed; and the playing information corresponding to the identified expression is played according to the identified expression and the preset corresponding relationship between the expression and the playing information. By sensing the user's requirements and playing corresponding information according to those requirements, a better user experience is provided.
In the embodiments provided in the present invention, it should be understood that the disclosed system and method can be implemented in other ways. The system and method embodiments described above are merely illustrative.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Although the embodiments of the present invention have been described above, the above descriptions are only for the convenience of understanding the present invention, and are not intended to limit the present invention. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (13)

1. An information playing method, comprising:
acquiring an image of a region to be detected;
carrying out face detection on the image of the area to be detected;
when a face is detected in the image of the area to be detected, performing expression recognition;
and playing the playing information corresponding to the identified expression according to the identified expression and the preset corresponding relation between the expression and the playing information.
2. The information playing method according to claim 1, wherein the method further comprises:
and when the playing of the playing information corresponding to the recognized expression is finished, executing the step of acquiring the image of the area to be detected.
3. The information playing method according to claim 1, wherein the method further comprises:
and when the playing time of the playing information corresponding to the identified expression reaches the preset time, stopping playing, and executing the step of acquiring the image of the area to be detected.
4. The information playing method according to claim 1, wherein playing the playing information corresponding to the identified expression according to the identified expression and a preset correspondence between the expression and the playing information comprises:
determining playing information corresponding to the identified expression according to the identified expression and a preset corresponding relation between the expression and the playing information;
searching playing information corresponding to the identified expression from a preset internet searching platform;
and playing the searched playing information.
5. The information playing method according to claim 1, wherein playing the playing information corresponding to the identified expression according to the identified expression and a preset correspondence between the expression and the playing information comprises:
determining playing information corresponding to the identified expression according to the identified expression and a preset corresponding relation between the expression and the playing information;
searching playing information corresponding to the identified expression from local pre-stored information;
and playing the searched playing information.
6. The information playing method according to claim 1, wherein playing the playing information corresponding to the identified expression according to the identified expression and a preset correspondence between the expression and the playing information comprises:
determining playing information corresponding to the identified expression according to the identified expression and a preset corresponding relation between the expression and the playing information;
searching playing information corresponding to the identified expression from local pre-stored information;
if the playing information corresponding to the identified expression is not searched from the local pre-stored information, searching the playing information corresponding to the identified expression from a preset internet searching platform;
and playing the searched playing information.
7. The information playing method according to claim 1, wherein the performing of the face detection on the image of the region to be detected comprises:
and carrying out face detection on the image of the region to be detected by using an Adaboost cascade classifier based on Haar characteristics.
8. The information playback method according to any one of claims 1 to 7, wherein the playback information includes at least one of audio and image.
9. An information playback system, comprising:
the image acquisition module is used for acquiring an image of a region to be detected;
the face detection module is used for carrying out face detection on the image of the area to be detected;
the expression recognition module is used for recognizing the expression when the face is recognized in the image of the area to be detected;
and the information playing module is used for playing the playing information corresponding to the identified expression according to the identified expression and the preset corresponding relation between the expression and the playing information.
10. An electronic device, comprising a memory and a processor, the memory having stored thereon a computer program that, when executed by the processor, implements the information playing method according to any one of claims 1 to 8.
11. A domestic appliance comprising an electronic device according to claim 10.
12. The household appliance according to claim 11, wherein the household appliance is an air conditioner.
13. A storage medium having stored thereon a computer program which, when executed by one or more processors, implements the information playback method according to any one of claims 1 to 8.
CN202010250873.3A 2020-04-01 2020-04-01 Information playing method and system, electronic equipment, household appliance and storage medium Withdrawn CN111476140A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010250873.3A CN111476140A (en) 2020-04-01 2020-04-01 Information playing method and system, electronic equipment, household appliance and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010250873.3A CN111476140A (en) 2020-04-01 2020-04-01 Information playing method and system, electronic equipment, household appliance and storage medium

Publications (1)

Publication Number Publication Date
CN111476140A true CN111476140A (en) 2020-07-31

Family

ID=71750637

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010250873.3A Withdrawn CN111476140A (en) 2020-04-01 2020-04-01 Information playing method and system, electronic equipment, household appliance and storage medium

Country Status (1)

Country Link
CN (1) CN111476140A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114999534A (en) * 2022-06-10 2022-09-02 中国第一汽车股份有限公司 Method, device and equipment for controlling playing of vehicle-mounted music and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102467668A (en) * 2010-11-16 2012-05-23 鸿富锦精密工业(深圳)有限公司 Emotion detecting and soothing system and method
CN106383447A (en) * 2016-10-29 2017-02-08 深圳智乐信息科技有限公司 Method and system for adjusting smart home automatically
CN109093627A (en) * 2017-06-21 2018-12-28 富泰华工业(深圳)有限公司 intelligent robot

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102467668A (en) * 2010-11-16 2012-05-23 鸿富锦精密工业(深圳)有限公司 Emotion detecting and soothing system and method
CN106383447A (en) * 2016-10-29 2017-02-08 深圳智乐信息科技有限公司 Method and system for adjusting smart home automatically
CN109093627A (en) * 2017-06-21 2018-12-28 富泰华工业(深圳)有限公司 intelligent robot

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114999534A (en) * 2022-06-10 2022-09-02 中国第一汽车股份有限公司 Method, device and equipment for controlling playing of vehicle-mounted music and storage medium

Similar Documents

Publication Publication Date Title
CN110119711B (en) Method and device for acquiring character segments of video data and electronic equipment
KR102416558B1 (en) Video data processing method, device and readable storage medium
US10070050B2 (en) Device, system and method for cognitive image capture
US10474903B2 (en) Video segmentation using predictive models trained to provide aesthetic scores
JP6699916B2 (en) System and method for user behavior based content recommendation
TWI621470B (en) Rapid recognition method and intelligent domestic robot
US10380208B1 (en) Methods and systems for providing context-based recommendations
CN112312215B (en) Startup content recommendation method based on user identification, smart television and storage medium
US11457061B2 (en) Creating a cinematic storytelling experience using network-addressable devices
CN113824972B (en) Live video processing method, device, equipment and computer readable storage medium
CN111767814A (en) Video determination method and device
US20140241592A1 (en) Systems and Methods for Automatic Image Editing
CN112000024B (en) Method, device and equipment for controlling household appliance
CN111476140A (en) Information playing method and system, electronic equipment, household appliance and storage medium
CN114339076A (en) Video shooting method and device, electronic equipment and storage medium
CN113450804A (en) Voice visualization method and device, projection equipment and computer readable storage medium
CN108882024B (en) Video playing method and device and electronic equipment
US20230066331A1 (en) Method and system for automatically capturing and processing an image of a user
CN115866339A (en) Television program recommendation method and device, intelligent device and readable storage medium
CN111915637A (en) Picture display method and device, electronic equipment and storage medium
CN111414883A (en) Program recommendation method, terminal and storage medium based on face emotion
CN111209501B (en) Picture display method and device, electronic equipment and storage medium
CN114143610B (en) Shooting guiding method, shooting guiding device, server and electronic equipment
CN114863519A (en) Control method of mobile terminal, mobile terminal and storage medium
CN114925233A (en) Video recommendation method and device, electronic equipment and storage medium

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20200731)