CN113569634A — Scene feature control method and device, storage medium and electronic device
- Publication number: CN113569634A (application CN202110681521.8A)
- Authority: CN (China)
- Prior art keywords: target, scene, target object, color, features
- Prior art date: 2021-06-18
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
Abstract
The invention discloses a scene feature control method and device, a storage medium, and an electronic device. The method includes: detecting that a target object has entered a target scene; performing feature extraction on the target object to obtain target object features; and, according to the target object features, controlling the target scene to display the target scene features corresponding to them. This technical solution solves problems in the related art such as the low degree of feature matching between a scene and the objects in it.
Description
Technical Field
The present invention relates to the field of communications, and in particular, to a method and an apparatus for controlling scene characteristics, a storage medium, and an electronic apparatus.
Background
As science and technology advance and living standards rise, people have ever higher experiential expectations of their home environments. With current technology, after a user enters a scene, the scene cannot flexibly adapt its features to the user's features, so the degree of matching between the scene and the user is low, and the user's demand for a richer scene-interaction experience goes unmet.
No effective solution has yet been proposed for problems in the related art such as this low degree of feature matching between a scene and the objects in it.
Disclosure of Invention
The embodiments of the present invention provide a scene feature control method and device, a storage medium, and an electronic device, so as to at least solve problems in the related art such as the low degree of feature matching between a scene and the objects in it.
According to an embodiment of the present invention, there is provided a method for controlling a scene characteristic, including: detecting that a target object enters a target scene; extracting the features of the target object to obtain the features of the target object; and controlling the target scene to display the target scene characteristics corresponding to the target object characteristics according to the target object characteristics.
In one exemplary embodiment, controlling the target scene to exhibit a target scene feature corresponding to the target object feature according to the target object feature includes: determining corresponding scene features as the target scene features according to the color features included in the target object features; and controlling the target scene to show the target scene characteristics.
In an exemplary embodiment, performing feature extraction on the target object to obtain the target object features includes: acquiring a color signal emitted by the target object, wherein the color signal comprises at least one of the following: an emotion signal and an appearance signal; and extracting the color feature from the color signal as the target object feature.
In an exemplary embodiment, determining a corresponding scene feature as the target scene feature according to the color feature included in the target object feature includes: inputting the color features into a first recognition model corresponding to the target object, wherein the first recognition model is obtained by training an initial recognition model by using historical object features labeled with color labels of the target object, and the historical object features are extracted from historical network data of the target object; and acquiring the target scene characteristics output by the first recognition model.
In an exemplary embodiment, determining a corresponding scene feature as the target scene feature according to the color feature included in the target object feature includes: inputting target emotion characteristics included in the color characteristics into a second recognition model corresponding to the target object, wherein the second recognition model is obtained by training an initial recognition model by using historical emotion characteristics marked with a color label of the target object, and the historical emotion characteristics are extracted from historical network data of the target object; acquiring a target color label output by the second recognition model; and carrying out color fusion on the target color label and the target appearance characteristics included by the color characteristics to obtain the target scene characteristics.
In one exemplary embodiment, controlling the target scene to exhibit the target scene features includes: controlling the target scene to display the target color included in the target scene features to obtain a target color scene; and displaying the target image included in the target scene features on the target color scene.
In one exemplary embodiment, detecting the entry of the target object into the target scene includes: detecting entry of the target object into a target environment; identifying an object identifier corresponding to the target object; and determining a region corresponding to the object identifier in the target environment as the target scene.
According to another embodiment of the present invention, there is also provided a scene feature control apparatus including: the detection module is used for detecting that a target object enters a target scene; the extraction module is used for extracting the features of the target object to obtain the features of the target object; and the control module is used for controlling the target scene to display the target scene characteristics corresponding to the target object characteristics according to the target object characteristics.
According to still another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium in which a computer program is stored, wherein the computer program is configured to execute the control method of the scene feature described above when running.
According to another aspect of the embodiments of the present invention, there is also provided an electronic apparatus, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the method for controlling the scene characteristics through the computer program.
In the embodiment of the invention, entry of a target object into a target scene is detected; feature extraction is performed on the target object to obtain target object features; and, according to those features, the target scene is controlled to display the corresponding target scene features. In other words, objects entering the target scene are monitored; when a target object is detected, its features are extracted and the scene is controlled to display the matching scene features, so that the target scene matches the features of the object that entered it. This technical solution solves problems in the related art such as the low degree of feature matching between a scene and the objects in it, and achieves the technical effect of improving that degree of matching.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a block diagram of a hardware configuration of a computer terminal of a method for controlling scene characteristics according to an embodiment of the present invention;
fig. 2 is a flowchart of a control method of scene characteristics according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a control method of scene characteristics according to an embodiment of the present invention;
fig. 4 is a block diagram of a control apparatus for scene characteristics according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The method provided by the embodiment of the application can be executed on a computer terminal or a similar computing device. Taking operation on a computer terminal as an example, fig. 1 is a block diagram of the hardware configuration of a computer terminal running the scene feature control method according to an embodiment of the present invention. As shown in fig. 1, the computer terminal may include one or more processors 102 (only one is shown in fig. 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)) and a memory 104 for storing data; in an exemplary embodiment, it may also include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and does not limit the structure of the computer terminal. For example, the computer terminal may include more or fewer components than shown in fig. 1, or have a configuration with equivalent or greater functionality.
The memory 104 may be used to store a computer program, for example, a software program of an application software and a module, such as a computer program corresponding to the control method of the scene feature in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, so as to implement the method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to a computer terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal. In one example, the transmission device 106 includes a Network adapter (NIC), which can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In this embodiment, a method for controlling scene characteristics is provided, which is applied to the computer terminal described above, and fig. 2 is a flowchart of a method for controlling scene characteristics according to an embodiment of the present invention, where the flowchart includes the following steps:
step S202, detecting that a target object enters a target scene;
step S204, extracting the features of the target object to obtain the features of the target object;
step S206, controlling the target scene to display the target scene characteristics corresponding to the target object characteristics according to the target object characteristics.
Through the above steps, entry of a target object into a target scene is detected; feature extraction is performed on the target object to obtain target object features; and, according to those features, the target scene is controlled to display the corresponding target scene features. That is, objects entering the target scene are monitored; when a target object is detected, its features are extracted and the scene is controlled accordingly, so that the target scene matches the features of the object that entered it. This solves problems in the related art such as the low degree of feature matching between a scene and the objects in it, and improves that degree of matching.
In the technical solution provided in step S202, the target scene may include, but is not limited to, any type of scene that supports intelligent control, such as: a home, a garage, an office, a classroom, a teaching building, a laboratory, a pasture, and the like.
Optionally, in this embodiment, the target object may include, but is not limited to: humans (the user or a person designated by the user), animals (pets, poultry, livestock), and the like. Scene features may be controlled, but are not limited to being controlled, according to the priority of the objects entering the target scene. For example: user A and user B both enter the target scene; if user A's priority is higher than user B's, user A is determined to be the target object and the target scene's features are controlled to match user A's features for display, while if user B's priority is higher, user B is determined to be the target object and the scene's features are controlled to match user B's.
Optionally, in this embodiment, the scene features may also be controlled according to the chronological order in which objects enter the target scene. For example: user A enters the target scene first, and the features of the target scene are controlled to match user A's features; when user B then enters the target scene, the scene's features continue to match user A's.
Optionally, in this embodiment, determining the target object by priority may, but is not limited to, take precedence over determining it by order of entry. For example: user A enters the target scene first, and the scene's features are controlled to match user A's features; user B then enters. If user B's priority is higher than user A's, user B is determined to be the target object and the scene's features are controlled to match user B's features; if user A's priority is higher, the scene's features continue to match user A's. A selection rule combining both criteria is sketched below.
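The following minimal Python sketch makes the combined rule concrete: priority outranks arrival order. The `Visitor` type, its field names, and the numeric conventions are illustrative assumptions, not part of the patent:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Visitor:
    identifier: str
    priority: int       # higher value = higher priority (an assumed convention)
    arrival_order: int  # lower value = entered the target scene earlier

def select_target(visitors: List[Visitor]) -> Optional[Visitor]:
    """Pick the object whose features the target scene should match."""
    if not visitors:
        return None
    # Priority outranks arrival order; among equal priorities, earliest entry wins.
    return min(visitors, key=lambda v: (-v.priority, v.arrival_order))

# User A entered first, but user B has the higher priority, so the scene
# switches to match user B's features.
print(select_target([Visitor("user_a", 1, 0), Visitor("user_b", 2, 1)]).identifier)
```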
Optionally, in this embodiment, the target object entering the target scene may be detected in, but is not limited to, the following manner: detecting that an object enters the target scene; recognizing the biometric features of the entering object to obtain an object identifier; matching that identifier against the identifiers recorded in an identifier list, where the list records the identifiers of objects bound to the target scene; and, if the object identifier successfully matches an identifier in the list, determining that the target object has been detected in the target scene.
Optionally, in this embodiment, the object entering the target scene may be detected by, but is not limited to, biometric recognition, such as: facial recognition, iris recognition, fingerprint recognition, voiceprint recognition, and the like.
Optionally, in this embodiment, the user may configure in advance the objects bound to the target scene, and the correspondence between the target scene and the identifiers of the bound objects is stored for use in target object detection. For example: the user may bind, but is not limited to binding, objects such as family members, friends, and pets to the target scene, and may also set, but is not limited to setting, priorities for the bound objects, for example: family members outrank friends, and friends outrank pets.
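The detection flow can be sketched as follows; `recognize` stands in for any biometric recognizer (face, iris, fingerprint, voiceprint), and all names here are assumptions rather than an API fixed by the patent:

```python
from typing import Callable, Optional, Set

def detect_target_object(sensor_frame: object,
                         recognize: Callable[[object], str],
                         bound_identifiers: Set[str]) -> Optional[str]:
    """Return the object identifier if the entering object is bound to the scene."""
    object_id = recognize(sensor_frame)  # e.g. facial or voiceprint recognition
    # A successful match against the identifier list means the target object
    # has been detected in the target scene; otherwise nothing is triggered.
    return object_id if object_id in bound_identifiers else None
```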
In one exemplary embodiment, entry of a target object into a target scene may be detected, but is not limited to, by: detecting entry of the target object into a target environment; identifying an object identifier corresponding to the target object; and determining a region corresponding to the object identifier in the target environment as the target scene.
Optionally, in this embodiment, the target scene may be, but is not limited to, a part of the target environment. For example: different houses in a building may correspond to different scenes, different rooms in a house to different scenes, and different areas in a room to different scenes.
Optionally, in this embodiment, each scene may correspond to, but is not limited to, one or more objects, and a scene's features may be affected by the objects entering it. For example: consider a family of three whose home environment is divided into several scenes, including a living room, a kitchen, a bathroom, a master bedroom where the parents sleep, and a second bedroom where the child sleeps. The living room and bathroom may correspond to all three members, the kitchen to the mother, the master bedroom to the parents, and the second bedroom to the child. If the mother is detected coming home, the scene features of the living room, kitchen, bathroom, and master bedroom are controlled to match her current object features. If the child is detected coming home, the scene features of the living room, bathroom, and second bedroom are controlled to match the child's object features.
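One way to realize the identifier-to-region mapping in the family example is a simple lookup table; the member names, room names, and the mapping itself are illustrative assumptions:

```python
from typing import Dict, Set

# Regions of the target environment bound to each object identifier.
SCENES_BY_OBJECT: Dict[str, Set[str]] = {
    "mother": {"living_room", "kitchen", "bathroom", "master_bedroom"},
    "father": {"living_room", "bathroom", "master_bedroom"},
    "child":  {"living_room", "bathroom", "second_bedroom"},
}

def scenes_for(object_id: str) -> Set[str]:
    """Regions treated as target scenes for this object."""
    return SCENES_BY_OBJECT.get(object_id, set())

print(scenes_for("child"))  # these scenes adapt when the child comes home
```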
In the technical solution provided in step S204, the manner of extracting the features of the target object may include, but is not limited to, a feature extraction algorithm, AI model identification, and the like.
Optionally, in this embodiment, the target object feature of the target object may include, but is not limited to, features of one or more dimensions, such as: personality characteristics, emotional characteristics, appearance characteristics (e.g., facial complexion, skin tone, hair style, hair color, clothing brand, etc.), physiological index characteristics (e.g., heart rate, pressure values, blood oxygen saturation, etc.).
Optionally, in this embodiment, feature extraction on the target object may include, but is not limited to, extracting features from the target object's historical network data and from its state when it currently enters the target scene. Different extraction methods may extract different types of features, or the same type.
In an exemplary embodiment, feature extraction may be performed on the target object in, but not limited to, the following manner to obtain the target object features: acquiring a color signal emitted by the target object, wherein the color signal comprises at least one of the following: an emotion signal and an appearance signal; and extracting the color feature from the color signal as the target object feature.
Optionally, in this embodiment, the color signal emitted by the target object is derived from at least one of the following: an emotion signal and an appearance signal. The emotion signal may include, but is not limited to: the speech and facial expression of the target object. The appearance signal may include, but is not limited to: the facial appearance, skin color, hair style, hair color, clothing brand, and so on of the target object.
Optionally, in this embodiment, the target scene may be controlled to exhibit scene features corresponding thereto, but not limited to, depending on the mood and the color of the appearance of the target object when entering the target scene.
Optionally, in this embodiment, the target emotional characteristic of the target object may be, but is not limited to, used for characterizing the current emotion of the target object, such as: happy, sad, excited, angry, depressed, etc.
Optionally, in this embodiment, the appearance characteristic of the target object may be, but is not limited to, a color tone used for characterizing the current appearance of the target object, such as: clothing tone, hair color, skin color, and the like.
Optionally, in this embodiment, the target emotional feature may be, but is not limited to, extracted from a voice signal uttered by the target object and an expression signal expressed by the face of the target object, and the target color feature may be, but is not limited to, extracted from a clothing color of the target object.
Optionally, in this embodiment, when the target object is detected entering the target scene, the voice signal it utters is collected and analyzed to obtain the emotional features carried in the speech; the expression it exhibits is captured and facial expression analysis is performed to obtain the emotional features carried in the expression; and the two sets of emotional features are fused to obtain the target emotional features.
Optionally, in this embodiment, when it is detected that the target object enters the target scene, the clothing color of the target object is identified, for example: the colors of clothes, trousers, skirts, hats, shoes, gloves, socks, accessories and the like are identified as target color signals, and target color features are extracted from the target color signals.
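A minimal sketch of this two-source pipeline, assuming per-label emotion scores from speech and from facial expression fused by a weighted average over a shared label space (the patent only says the features are fused; the weights and label set are assumptions), with the recognized clothing colors attached as the appearance part:

```python
from typing import Dict

def fuse_emotions(speech_scores: Dict[str, float],
                  face_scores: Dict[str, float],
                  speech_weight: float = 0.5) -> Dict[str, float]:
    """Weighted-average fusion of speech-derived and expression-derived emotions."""
    labels = set(speech_scores) | set(face_scores)
    return {label: speech_weight * speech_scores.get(label, 0.0)
                   + (1.0 - speech_weight) * face_scores.get(label, 0.0)
            for label in labels}

speech = {"happy": 0.7, "sad": 0.1}          # from voice analysis
face = {"happy": 0.5, "angry": 0.2}          # from facial expression analysis
target_object_feature = {
    "emotion": fuse_emotions(speech, face),      # target emotional features
    "appearance": ["navy_coat", "white_shoes"],  # recognized clothing colors
}
```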
In the technical solution provided in step S206, an AI model may be used to automatically identify and match target object features to target scene features.
Optionally, in this embodiment, the target scene characteristics may include, but are not limited to, colors exhibited in the scene, lights in the scene, images exhibited in the scene, and the like.
In an exemplary embodiment, the target scene may be controlled to exhibit target scene features corresponding to the target object features according to the target object features by, but not limited to: determining corresponding scene features as the target scene features according to the color features included in the target object features; and controlling the target scene to show the target scene characteristics.
Optionally, in this embodiment, the target object characteristics may include, but are not limited to, color characteristics, which may be derived from, but are not limited to, mood and appearance of the target object.
Optionally, in this embodiment, the target scene features to be displayed correspond to the color features of the target object, and the correspondence may either echo them or counter them. For example: if the target object's color feature is a rather gloomy tone, the scene can be matched with a fresh or sunny tone as the target scene feature, helping to relieve the object's low mood. If the color feature is a sunny tone, the scene can be matched with an equally sunny tone to suit the object's high spirits, or with a cooler, calmer tone to temper them somewhat.
In an exemplary embodiment, according to the color feature included in the target object feature, the corresponding scene feature may be determined as the target scene feature by, but not limited to, one of the following manners:
Mode one may include, but is not limited to, the following steps:
step 11, inputting the color features into a first recognition model corresponding to the target object, wherein the first recognition model is obtained by training an initial recognition model by using historical object features labeled with color labels of the target object, and the historical object features are extracted from historical network data of the target object;
and step 12, acquiring the target scene characteristics output by the first recognition model.
Optionally, in this embodiment, the first recognition model is obtained by training the initial recognition model using a historical object feature labeled with a color label of the target object, and the historical object feature may be, but is not limited to, extracted from historical network data of the target object. The historical network data may include, but is not limited to, shopping data, transaction data, chat data, social data, and the like for the target object.
Optionally, in this embodiment, the first recognition model may be trained, but is not limited to being trained, through the following process: the historical object features are input into an initial recognition model to obtain the initial scene features it outputs; the model parameters of the initial recognition model are adjusted according to the difference between those initial scene features and the color labels with which the historical object features are annotated; and when that difference satisfies a preset condition, the parameters that achieve it are taken as the parameters of the first recognition model. Alternatively, if the number of parameter adjustments reaches a preset limit before the condition is met, adjustment stops, an event prompting the halt of model training is raised, and training resumes after a technician tunes the model (for example, adjusts its hyperparameters).
Optionally, in this embodiment, in the first manner, the color feature of the target object is input into the first recognition model, and data output by the first recognition model is directly acquired as the target scene feature.
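The training loop with its two stopping conditions (difference below a preset threshold, or an exhausted adjustment budget that hands control back to a technician) can be sketched generically; `model`, `loss_fn`, and `adjust` are stand-ins, since the patent fixes neither an architecture nor an optimizer:

```python
from typing import Any, Callable, List, Tuple

def train_recognition_model(model: Callable, examples: List[Tuple[Any, Any]],
                            loss_fn: Callable, adjust: Callable,
                            threshold: float = 0.05, max_steps: int = 10_000):
    for _ in range(max_steps):
        # "Difference" between model output and the annotated color labels.
        difference = sum(loss_fn(model(x), y) for x, y in examples) / len(examples)
        if difference <= threshold:   # preset condition met: keep these parameters
            return model
        adjust(model, examples)       # adjust the model parameters
    # Adjustment budget exhausted: stop and prompt for manual hyperparameter tuning.
    raise RuntimeError("training halted: adjust the model (e.g. hyperparameters) and retrain")
```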
Mode two may include, but is not limited to, the following steps:
step 21, inputting target emotion characteristics included in the color characteristics into a second recognition model corresponding to the target object, wherein the second recognition model is obtained by training an initial recognition model by using historical emotion characteristics marked with a color label of the target object, and the historical emotion characteristics are extracted from historical network data of the target object;
step 22, obtaining a target color label output by the second recognition model;
and step 23, performing color fusion on the target color label and the target appearance characteristics included in the color characteristics to obtain the target scene characteristics.
Optionally, in this embodiment, the second recognition model is obtained by training the initial recognition model using a historical emotional feature of the color label labeled with the target object, and the historical emotional feature is extracted from the historical network data of the target object. The historical network data may include, but is not limited to, shopping data, transaction data, chat data, social data, and the like for the target object.
Optionally, in this embodiment, the second recognition model may be trained, but is not limited to being trained, through the following process: the historical emotional features are input into an initial recognition model to obtain the initial color label it outputs; the model parameters are adjusted according to the difference between that initial color label and the color label with which the historical emotional features are annotated; and when the difference satisfies a preset condition, the parameters that achieve it are taken as the parameters of the second recognition model. Alternatively, if the number of adjustments reaches a preset limit before the condition is met, adjustment stops, an event prompting the halt of model training is raised, and training resumes after a technician tunes the model (for example, adjusts its hyperparameters).
Optionally, in this embodiment, in the second manner, the target emotional characteristic of the target object is input into the second recognition model, and data output by the second recognition model is acquired as the target color label. And carrying out color fusion on the target color label and the target color feature included by the target object feature so as to obtain the target scene feature.
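Mode two ends with a color-fusion step; the patent does not specify the fusion rule, so the sketch below blends the RGB value behind the target color label with the average of the appearance colors, using an assumed weight:

```python
from typing import List, Tuple

RGB = Tuple[int, int, int]

def fuse_colors(label_rgb: RGB, appearance_rgbs: List[RGB],
                label_weight: float = 0.6) -> RGB:
    """Blend the model's color label with the target object's appearance colors."""
    if not appearance_rgbs:
        return label_rgb
    avg = [sum(c[i] for c in appearance_rgbs) / len(appearance_rgbs) for i in range(3)]
    return tuple(round(label_weight * label_rgb[i] + (1 - label_weight) * avg[i])
                 for i in range(3))

# e.g. target_color_label = second_model(target_emotion_features) -> warm orange
target_scene_color = fuse_colors((255, 160, 60), [(30, 60, 120), (240, 240, 240)])
```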
Optionally, in this embodiment, the types of the initial recognition model used for training the first recognition model and the initial recognition model used for training the second recognition model may be the same or different, and the model structures of the two models may be the same or different.
In the technical solution provided in step S206 above, the positions for displaying the target scene features in the target scene may include, but are not limited to: the walls and ceiling of the target scene, and the electronic devices, home equipment, furniture, and household appliances within it.
Optionally, in this embodiment, the display of target scene features in the target scene may be implemented by, but is not limited to, holographic projection technology, surface-material technology, light control technology, and the like.
In an exemplary embodiment, the target scene may be controlled to exhibit the target scene characteristics in, but not limited to, the following ways: controlling the target scene to display the target color included in the target scene characteristics to obtain a target color scene; and displaying a target image included by the target scene characteristic on the target color scene.
Optionally, in this embodiment, the target scene features may include, but are not limited to, a target color for changing the color of the scene and a target image for displaying a pattern in the scene. The scene color may first be converted to the target color, and the target image then displayed over it.
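The two-stage display step (repaint first, then project the image) might look like the following; `SceneController` and its methods are hypothetical stand-ins for whatever lighting or projection interface the deployment exposes:

```python
from typing import Tuple

class SceneController:
    """Hypothetical facade over the scene's lighting/projection hardware."""
    def set_base_color(self, rgb: Tuple[int, int, int]) -> None:
        print(f"scene surfaces set to {rgb}")             # stage 1: target color scene

    def project_image(self, image_path: str) -> None:
        print(f"projecting {image_path} onto the scene")  # stage 2: target image

def show_scene_features(controller: SceneController, features: dict) -> None:
    controller.set_base_color(features["target_color"])
    controller.project_image(features["target_image"])

show_scene_features(SceneController(),
                    {"target_color": (255, 160, 60), "target_image": "sunrise.png"})
```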
In order to better understand the scene feature control method, the following describes an implementation flow with reference to an optional embodiment; the technical solution of the embodiments of the present invention is not limited to this flow.
In this embodiment, a method for controlling a scene characteristic is provided, and fig. 3 is a schematic diagram of a method for controlling a scene characteristic according to an embodiment of the present invention, as shown in fig. 3, the following steps are specifically performed:
step S301: perform voice analysis on the collected user speech through intelligent speech recognition, identifying the current tone of voice, the emotion carried in the speech, and so on as one part of the emotional features;
step S302: perform clothing color recognition on the captured picture of the user through image analysis, identifying the user's current clothing colors as the appearance features;
step S303: analyze the user's facial picture through facial expression analysis, identifying the user's current facial emotion as another part of the emotional features;
step S304: through customized AI analysis, map the various features to a set of colors and images;
step S305: display the colors and images in the target scene through holographic projection or surface-material technology, thereby "reskinning" household appliances, furnishings, and the like.
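Chaining steps S301-S305 gives the following end-to-end sketch; every analyzer is a caller-supplied function, since the patent leaves the concrete recognizers and renderer open:

```python
from typing import Any, Callable, Dict

def scene_skin_change(audio: Any, photo: Any, face_img: Any,
                      analyzers: Dict[str, Callable], renderer: Callable) -> None:
    tone_emotion = analyzers["speech"](audio)         # S301: speech analysis
    dress_colors = analyzers["dress"](photo)          # S302: clothing color recognition
    face_emotion = analyzers["expression"](face_img)  # S303: facial expression analysis
    colors, images = analyzers["ai_custom"](          # S304: customized AI analysis
        tone_emotion, face_emotion, dress_colors)
    renderer(colors, images)                          # S305: holographic / surface-material display
```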
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments may be implemented by software plus the necessary general-purpose hardware platform, or by hardware alone, though the former is in many cases the better implementation. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product stored on a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk) and including instructions that enable a terminal device (such as a mobile phone, a computer, a server, or a network device) to execute the methods of the embodiments of the present invention.
Fig. 4 is a block diagram of a scene feature control apparatus according to an embodiment of the present invention; as shown in fig. 4, the apparatus includes:
a detection module 42, configured to detect that a target object enters a target scene;
the extraction module 44 is configured to perform feature extraction on the target object to obtain a target object feature;
and a control module 46, configured to control the target scene to display a target scene feature corresponding to the target object feature according to the target object feature.
Through the above embodiment, entry of a target object into a target scene is detected; feature extraction is performed on the target object to obtain target object features; and, according to those features, the target scene is controlled to display the corresponding target scene features. That is, objects entering the target scene are monitored; when a target object is detected, its features are extracted and the scene is controlled accordingly, so that the target scene matches the features of the object that entered it. This solves problems in the related art such as the low degree of feature matching between a scene and the objects in it, and improves that degree of matching.
In one exemplary embodiment, the control module includes: the first determining unit is used for determining corresponding scene characteristics as the target scene characteristics according to the color characteristics included in the target object characteristics; and the control unit is used for controlling the target scene to display the target scene characteristics.
In an exemplary embodiment, the extraction module includes: an acquisition unit, configured to acquire a color signal emitted by the target object, wherein the color signal comprises at least one of the following: an emotion signal and an appearance signal; and an extraction unit, configured to extract the color feature from the color signal as the target object feature.
In an exemplary embodiment, the first determining unit is configured to: inputting the color features into a first recognition model corresponding to the target object, wherein the first recognition model is obtained by training an initial recognition model by using historical object features labeled with color labels of the target object, and the historical object features are extracted from historical network data of the target object; and acquiring the target scene characteristics output by the first recognition model.
In an exemplary embodiment, the first determining unit is configured to: inputting target emotion characteristics included in the color characteristics into a second recognition model corresponding to the target object, wherein the second recognition model is obtained by training an initial recognition model by using historical emotion characteristics marked with a color label of the target object, and the historical emotion characteristics are extracted from historical network data of the target object; acquiring a target color label output by the second recognition model; and carrying out color fusion on the target color label and the target appearance characteristics included by the color characteristics to obtain the target scene characteristics.
In an exemplary embodiment, the control unit is configured to: controlling the target scene to display the target color included in the target scene characteristics to obtain a target color scene; and displaying a target image included by the target scene characteristic on the target color scene.
In one exemplary embodiment, the detection module includes: the detection unit is used for detecting that the target object enters a target environment; the identification unit is used for identifying an object identifier corresponding to the target object; a second determining unit, configured to determine, as the target scene, a region in the target environment corresponding to the object identifier.
An embodiment of the present invention further provides a storage medium including a stored program, wherein the program executes any one of the methods described above.
Alternatively, in the present embodiment, the storage medium may be configured to store program codes for performing the following steps:
s1, detecting that the target object enters the target scene;
s2, extracting the features of the target object to obtain the features of the target object;
s3, controlling the target scene to show the target scene characteristics corresponding to the target object characteristics according to the target object characteristics.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, detecting that the target object enters the target scene;
s2, extracting the features of the target object to obtain the features of the target object;
s3, controlling the target scene to show the target scene characteristics corresponding to the target object characteristics according to the target object characteristics.
Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of multiple computing devices. Optionally, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device; in some cases, the steps shown or described may be performed in an order different from that described here. Alternatively, they may be fabricated separately as individual integrated-circuit modules, or several of them may be combined into a single integrated-circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. A method for controlling scene characteristics, comprising:
detecting that a target object enters a target scene;
extracting the features of the target object to obtain the features of the target object;
and controlling the target scene to display the target scene characteristics corresponding to the target object characteristics according to the target object characteristics.
2. The method for controlling the scene characteristics according to claim 1, wherein controlling the target scene to exhibit the target scene characteristics corresponding to the target object characteristics according to the target object characteristics comprises:
determining corresponding scene features as the target scene features according to the color features included in the target object features;
and controlling the target scene to show the target scene characteristics.
3. The method for controlling the scene characteristics according to claim 2, wherein performing feature extraction on the target object to obtain the target object features comprises:
acquiring a color signal emitted by the target object, wherein the color signal comprises at least one of the following: an emotion signal and an appearance signal;
and extracting the color feature from the color signal as the target object feature.
4. The method for controlling scene characteristics according to claim 2, wherein determining, as the target scene characteristics, corresponding scene characteristics according to color characteristics included in the target object characteristics includes:
inputting the color features into a first recognition model corresponding to the target object, wherein the first recognition model is obtained by training an initial recognition model by using historical object features labeled with color labels of the target object, and the historical object features are extracted from historical network data of the target object;
and acquiring the target scene characteristics output by the first recognition model.
5. The method for controlling scene characteristics according to claim 2, wherein determining, as the target scene characteristics, corresponding scene characteristics according to color characteristics included in the target object characteristics includes:
inputting target emotion characteristics included in the color characteristics into a second recognition model corresponding to the target object, wherein the second recognition model is obtained by training an initial recognition model by using historical emotion characteristics marked with a color label of the target object, and the historical emotion characteristics are extracted from historical network data of the target object;
acquiring a target color label output by the second recognition model;
and carrying out color fusion on the target color label and the target appearance characteristics included by the color characteristics to obtain the target scene characteristics.
6. The method for controlling the scene characteristics according to claim 2, wherein controlling the target scene to exhibit the target scene characteristics comprises:
controlling the target scene to display the target color included in the target scene characteristics to obtain a target color scene;
and displaying a target image included by the target scene characteristic on the target color scene.
7. The method according to any one of claims 1 to 6, wherein detecting entry of the target object into the target scene comprises:
detecting entry of the target object into a target environment;
identifying an object identifier corresponding to the target object;
and determining a region corresponding to the object identifier in the target environment as the target scene.
8. A control apparatus for scene characteristics, comprising:
the detection module is used for detecting that a target object enters a target scene;
the extraction module is used for extracting the features of the target object to obtain the features of the target object; and the control module is used for controlling the target scene to display the target scene characteristics corresponding to the target object characteristics according to the target object characteristics.
9. A computer-readable storage medium, comprising a stored program, wherein the program is operable to perform the method of any one of claims 1 to 7.
10. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method of any of claims 1 to 7 by means of the computer program.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110681521.8A | 2021-06-18 | 2021-06-18 | Scene characteristic control method and device, storage medium and electronic device (granted as CN113569634B) |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN113569634A | 2021-10-29 |
| CN113569634B | 2024-03-26 |
Family

ID=78162323

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202110681521.8A | Scene characteristic control method and device, storage medium and electronic device | 2021-06-18 | 2021-06-18 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN113569634B (en) |
Patent Citations (9)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2018023515A1 * | 2016-08-04 | 2018-02-08 | 易晓阳 | Gesture and emotion recognition home control system |
| CN108115695A * | 2016-11-28 | 2018-06-05 | 沈阳新松机器人自动化股份有限公司 | An emotional color expression system and robot |
| US20200105130A1 * | 2017-05-31 | 2020-04-02 | Beijing Didi Infinity Technology And Development Co., Ltd. | Devices and methods for recognizing driving behavior based on movement data |
| CN107797459A * | 2017-09-15 | 2018-03-13 | 珠海格力电器股份有限公司 | Control method and device of terminal equipment, storage medium and processor |
| CN107853885A * | 2017-11-08 | 2018-03-30 | 邓鹏 | Intelligent wardrobe based on the itinerary, health status, and mood of business travelers |
| CN110671795A * | 2019-11-29 | 2020-01-10 | 北方工业大学 | Livable environment system based on artificial intelligence and use method thereof |
| CN111447124A * | 2020-04-02 | 2020-07-24 | 张瑞华 | Intelligent household control method and intelligent control equipment based on biological feature recognition |
| CN111741116A * | 2020-06-28 | 2020-10-02 | 海尔优家智能科技(北京)有限公司 | Emotion interaction method and device, storage medium and electronic device |
| CN112764352A * | 2020-12-21 | 2021-05-07 | 深圳创维-RGB电子有限公司 | Household environment adjusting method and device, server and storage medium |
Non-Patent Citations (3)

| Title |
|---|
| Tiago M. et al., "A Review on Human-Centered IoT-Connected Smart Labels for the Industry 4.0", Special Section on Human-Centered Smart Systems and Technologies, pages 25939-25957 * |
| Wu Yu (吴宇), "Research on smart home interaction design based on user scenarios" (in Chinese), China Masters' Theses Full-text Database, Engineering Science & Technology II, no. 06, pages 038-945 * |
| Wang Zhong (王仲), "Smart home comfort measurement and control system based on sensor networks" (in Chinese), China Masters' Theses Full-text Database, Information Science & Technology, no. 10, pages 140-1132 * |
Also Published As

| Publication number | Publication date |
|---|---|
| CN113569634B | 2024-03-26 |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |