CN113569634B - Scene characteristic control method and device, storage medium and electronic device - Google Patents

Scene characteristic control method and device, storage medium and electronic device

Info

Publication number
CN113569634B
CN113569634B (application CN202110681521.8A)
Authority
CN
China
Prior art keywords
target
scene
target object
color
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110681521.8A
Other languages
Chinese (zh)
Other versions
CN113569634A (en)
Inventor
宋波涛
刘建国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Haier Technology Co Ltd
Haier Smart Home Co Ltd
Original Assignee
Qingdao Haier Technology Co Ltd
Haier Smart Home Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Haier Technology Co Ltd, Haier Smart Home Co Ltd filed Critical Qingdao Haier Technology Co Ltd
Priority to CN202110681521.8A priority Critical patent/CN113569634B/en
Publication of CN113569634A publication Critical patent/CN113569634A/en
Application granted granted Critical
Publication of CN113569634B publication Critical patent/CN113569634B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a scene feature control method and device, a storage medium, and an electronic apparatus. The method comprises the following steps: detecting that a target object enters a target scene; performing feature extraction on the target object to obtain target object features; and controlling the target scene to display target scene features corresponding to the target object features according to the target object features. This technical scheme solves problems in the related art such as the low degree of feature matching between a scene and the objects in the scene.

Description

Scene characteristic control method and device, storage medium and electronic device
Technical Field
The present invention relates to the field of communications, and in particular, to a method and apparatus for controlling scene features, a storage medium, and an electronic apparatus.
Background
With the progress of science and technology and the improvement in people's quality of life, users have higher experience demands for home decoration. In the prior art, after a user enters a scene, the scene cannot flexibly adapt its own features to match the features of the user, so the degree of matching between the scene and the user is low, and the user's demand for a richer scene interaction experience cannot be satisfied.
Aiming at problems in the related art such as the low degree of feature matching between a scene and the objects in the scene, no effective solution has yet been proposed.
Disclosure of Invention
The embodiments of the invention provide a scene feature control method and device, a storage medium, and an electronic apparatus, which are used to at least solve problems in the related art such as the low degree of feature matching between a scene and the objects in the scene.
According to an embodiment of the present invention, there is provided a method for controlling scene characteristics, including: detecting that a target object enters a target scene; extracting the characteristics of the target object to obtain the characteristics of the target object; and controlling the target scene to display target scene characteristics corresponding to the target object characteristics according to the target object characteristics.
In an exemplary embodiment, controlling the target scene to exhibit target scene features corresponding to the target object features according to the target object features includes: determining corresponding scene characteristics as the target scene characteristics according to the color characteristics included in the target object characteristics; and controlling the target scene to display the target scene characteristics.
In an exemplary embodiment, feature extraction is performed on the target object to obtain a target object feature, including: collecting color signals emitted by the target object, wherein the color signals comprise at least one of the following: mood signals and appearance signals; the color feature is extracted from the color signal as the target object feature.
In an exemplary embodiment, determining the corresponding scene feature as the target scene feature according to the color feature included in the target object feature includes: inputting the color features into a first recognition model corresponding to the target object, wherein the first recognition model is obtained by training an initial recognition model by using historical object features marked with the color labels of the target object, and the historical object features are extracted from historical network data of the target object; and acquiring the target scene characteristics output by the first recognition model.
In an exemplary embodiment, determining the corresponding scene feature as the target scene feature according to the color feature included in the target object feature includes: inputting target emotion characteristics included in the color characteristics into a second recognition model corresponding to the target object, wherein the second recognition model is obtained by training an initial recognition model by using historical emotion characteristics marked with color labels of the target object, and the historical emotion characteristics are extracted from historical network data of the target object; acquiring a target color label output by the second recognition model; and performing color fusion on the target color label and target appearance features included by the color features to obtain the target scene features.
In an exemplary embodiment, controlling the target scene to exhibit the target scene feature includes: controlling the target scene to display target colors included in the target scene characteristics to obtain a target color scene; and displaying the target image included by the target scene characteristics on the target color scene.
In one exemplary embodiment, detecting the entry of the target object into the target scene includes: detecting that the target object enters a target environment; identifying an object identifier corresponding to the target object; and determining an area corresponding to the object identifier in the target environment as the target scene.
According to another embodiment of the present invention, there is also provided a control device for scene characteristics, including: the detection module is used for detecting that the target object enters the target scene; the extraction module is used for extracting the characteristics of the target object to obtain the characteristics of the target object; and the control module is used for controlling the target scene to display target scene characteristics corresponding to the target object characteristics according to the target object characteristics.
According to a further aspect of embodiments of the present invention, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is arranged to execute the above-described method of controlling scene features when run.
According to still another aspect of the embodiments of the present invention, there is further provided an electronic device including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the above-mentioned method for controlling scene features through the computer program.
In the embodiments of the invention, it is detected that a target object enters a target scene; feature extraction is performed on the target object to obtain target object features; and the target scene is controlled to display target scene features corresponding to the target object features. In other words, objects entering the target scene are detected; if the target object is detected, its target object features are extracted, and the target scene is controlled to display the corresponding target scene features, so that the target scene matches the features of the target object that has entered it. This technical scheme solves problems in the related art such as the low degree of feature matching between a scene and the objects in the scene, and achieves the technical effect of improving that degree of matching.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiments of the invention and together with the description serve to explain the invention and do not constitute a limitation on the invention. In the drawings:
fig. 1 is a hardware block diagram of a computer terminal of a scene feature control method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method of controlling scene features according to an embodiment of the invention;
FIG. 3 is a schematic diagram of a method of controlling scene features according to an embodiment of the invention;
fig. 4 is a block diagram of a scene feature control device according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art may better understand the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The method embodiments provided by the embodiments of the present application may be performed in a computer terminal or a similar computing device. Taking running on a computer terminal as an example, fig. 1 is a block diagram of a hardware structure of a computer terminal for a method of controlling scene features according to an embodiment of the present invention. As shown in fig. 1, the computer terminal may include one or more (only one is shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a microprocessor MCU or a processing device such as a programmable logic device FPGA) and a memory 104 for storing data, and in one exemplary embodiment may also include a transmission device 106 for communication functions and an input-output device 108. It will be appreciated by those skilled in the art that the configuration shown in fig. 1 is merely illustrative and is not intended to limit the configuration of the computer terminal. For example, the computer terminal may include more or fewer components than shown in FIG. 1, or have a different configuration with functions equivalent to or beyond those shown in FIG. 1.
The memory 104 may be used to store a computer program, for example, a software program of application software and a module, such as a computer program corresponding to a method for controlling scene features in an embodiment of the present invention, and the processor 102 executes the computer program stored in the memory 104 to perform various functional applications and data processing, that is, implement the above-mentioned method. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located relative to the processor 102, which may be connected to the computer terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission means 106 is arranged to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of a computer terminal. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, simply referred to as NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet wirelessly.
In this embodiment, a method for controlling a scene feature is provided and applied to the computer terminal, and fig. 2 is a flowchart of a method for controlling a scene feature according to an embodiment of the present invention, where the flowchart includes the following steps:
step S202, detecting that a target object enters a target scene;
step S204, extracting the characteristics of the target object to obtain the characteristics of the target object;
step S206, controlling the target scene to display target scene characteristics corresponding to the target object characteristics according to the target object characteristics.
Through the above steps, it is detected that a target object enters the target scene; feature extraction is performed on the target object to obtain target object features; and the target scene is controlled to display target scene features corresponding to the target object features. In other words, objects entering the target scene are detected; if the target object is detected, its target object features are extracted, and the target scene is controlled to display the corresponding target scene features, so that the target scene matches the features of the target object that has entered it. This technical scheme solves problems in the related art such as the low degree of feature matching between a scene and the objects in the scene, and achieves the technical effect of improving that degree of matching.
In the technical solution provided in step S202, the target scene may include, but is not limited to, any type of scene that allows intelligent control, such as: homes, garages, offices, classrooms, teaching buildings, laboratories, pastures, etc.
Alternatively, in the present embodiment, the target object may include, but is not limited to: humans (the user or persons specified by the user), animals (pets, poultry, livestock), and the like. Scene features may be controlled, but are not limited to being controlled, according to the priority of the objects entering the target scene. For example: if the priority of user B is higher than that of user A, user B is determined as the target object, and the target scene is controlled to display features matching those of user B.
Optionally, in this embodiment, scene features may also be controlled according to, but not limited to, the order in which objects enter the target scene. For example: user A enters the target scene first, and the features of the target scene are controlled to match those of user A for display. When user B then enters the target scene, the features of the target scene still match those of user A.
Alternatively, in this embodiment, determining the target object by priority may take precedence over determining it by order of entry into the target scene. For example: user A enters the target scene first, and the features of the target scene are controlled to match those of user A for display. User B then enters the target scene; if the priority of user B is higher than that of user A, user B is determined as the target object and the target scene is controlled to match the features of user B. If the priority of user A is higher than that of user B, the features of the target scene still match those of user A.
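For illustration only, the selection rules just described (priority first, then order of entry) can be sketched in Python as follows; the role names and priority values are assumptions introduced for this sketch and do not come from the patent.

PRIORITY = {"family": 3, "friend": 2, "pet": 1}   # illustrative priority values

def choose_target_object(present_objects):
    """present_objects: (object_id, role) tuples in order of entry into the scene."""
    best = None
    for object_id, role in present_objects:
        # a strictly higher priority displaces the current target;
        # equal priority keeps the earlier entrant, matching the rules above
        if best is None or PRIORITY[role] > PRIORITY[best[1]]:
            best = (object_id, role)
    return best[0] if best else None

# user A (a friend) enters first, then user B (family): user B becomes the target object
print(choose_target_object([("user_A", "friend"), ("user_B", "family")]))  # -> user_B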
Alternatively, in the present embodiment, the target object entering the target scene may be detected, but is not limited to being detected, in the following manner: detecting that an object enters the target scene; performing biometric recognition on the object entering the target scene to obtain an object identifier; matching the object identifier against the identifiers recorded in an identifier list, where the identifier list records the identifiers of objects bound to the target scene; and determining that the target object has been detected in the target scene if the object identifier is successfully matched with an identifier recorded in the identifier list.
Alternatively, in the present embodiment, an object entering the target scene may be detected by, but not limited to, using biometric identification, such as: facial recognition, iris recognition, fingerprint recognition, voiceprint recognition, and the like.
Optionally, in this embodiment, the user may bind objects to the target scene in advance, and the identifier of the target scene and its bound objects are stored with their correspondence for detection of the target object. For example: the user may, but is not limited to, bind objects such as family members, friends, and pets to the target scene, and may also set the priority of the bound objects for the target scene, for example: family members have higher priority than friends, and friends have higher priority than pets.
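A minimal sketch of this detection flow follows, assuming a hypothetical recognize_identifier helper standing in for biometric recognition and an illustrative identifier list bound to the scene; neither name comes from the patent.

from typing import Optional

# Identifiers of objects bound to the target scene (illustrative values only).
BOUND_IDENTIFIERS = {"mother", "father", "child"}

def recognize_identifier(sensor_frame: bytes) -> str:
    """Placeholder for biometric recognition (face, iris, fingerprint, voiceprint)."""
    raise NotImplementedError("a real system would call a recognition engine here")

def detect_target_object(sensor_frame: bytes) -> Optional[str]:
    """Return the object identifier if a bound object has entered the scene, else None."""
    object_id = recognize_identifier(sensor_frame)
    return object_id if object_id in BOUND_IDENTIFIERS else None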
In one exemplary embodiment, the entry of a target object into a target scene may be detected, but is not limited to, by: detecting that the target object enters a target environment; identifying an object identifier corresponding to the target object; and determining an area corresponding to the object identifier in the target environment as the target scene.
Alternatively, in the present embodiment, the target scene may be, but is not limited to, a part of an area in the target environment, such as: different houses in a building may be, but are not limited to being, different scenes, and different areas in a house may be, but are not limited to being, different scenes.
Alternatively, in this embodiment, each scene may, but is not limited to, correspond to one or more objects, and the entry of those objects into the scene may affect the scene features of that scene. For example: a family of three lives in a house divided into several scenes, including a living room, a kitchen, a bathroom, a master bedroom where the father and mother live, and a secondary bedroom where the child lives. The living room and bathroom correspond to all three people, the kitchen corresponds to the mother, the master bedroom corresponds to the father and mother, and the secondary bedroom corresponds to the child. If the mother is detected returning home, the scene features of the living room, kitchen, bathroom, and master bedroom are controlled to match the mother's current object features. If the child is detected coming home, the scene features of the living room, bathroom, and secondary bedroom are controlled to match the child's object features.
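For illustration, the object-to-area lookup in the family-of-three example can be sketched as follows; the room names and bindings are illustrative assumptions, not values taken from the patent.

# Which areas of the target environment act as target scenes for each bound object.
SCENES_BY_OBJECT = {
    "mother": ["living_room", "kitchen", "bathroom", "master_bedroom"],
    "father": ["living_room", "bathroom", "master_bedroom"],
    "child":  ["living_room", "bathroom", "secondary_bedroom"],
}

def scenes_for(object_id: str) -> list[str]:
    """Return the areas whose scene features should follow this object's features."""
    return SCENES_BY_OBJECT.get(object_id, [])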
In the technical solution provided in step S204, the method for extracting the features of the target object may include, but is not limited to, feature extraction algorithms, AI model recognition, and so on.
Alternatively, in the present embodiment, the target object features of the target object may include, but are not limited to, features of one or more dimensions, such as: character features, emotion features, appearance features (e.g., facial appearance, skin tone, hairstyle, hair color, clothing brand, etc.), and physiological index features (e.g., heart rate, stress value, blood oxygen saturation, etc.).
Alternatively, in the present embodiment, feature extraction of the target object may include, but is not limited to, feature extraction from historical network data of the target object and feature extraction from a state when the target object currently enters the target scene. Different feature extraction modes can extract different types of features and can also extract the same types of features.
In one exemplary embodiment, the target object may be extracted by, but is not limited to, the following ways to obtain the target object feature: collecting color signals emitted by the target object, wherein the color signals comprise at least one of the following: mood signals and appearance signals; the color feature is extracted from the color signal as the target object feature.
Optionally, in this embodiment, the color signal emitted by the target object is derived from at least one of the following: mood signals and appearance signals. The mood signals may include, but are not limited to: the speech, expression, and the like of the target object. The appearance signals may include, but are not limited to: the facial appearance, skin color, hairstyle, hair color, clothing brand, and the like of the target object.
Alternatively, in the present embodiment, the target scene may be controlled to exhibit the scene characteristics corresponding thereto according to, but not limited to, the emotion and the color of the appearance of the target object when entering the target scene.
Optionally, in this embodiment, the target emotional characteristic of the target object may be, but is not limited to, used to characterize the current emotion of the target object, such as: happiness, sadness, excitement, anger, depression, etc.
Alternatively, in the present embodiment, the appearance feature of the target object may be, but is not limited to, used to characterize the current appearance hue of the target object, such as: clothing tone, hair color, skin tone, and the like.
Alternatively, in the present embodiment, the target emotion feature may be, but not limited to, extracted from a voice signal emitted from the target object and an expression signal exhibited by the target object face, and the target color feature may be, but not limited to, extracted from a clothing color of the target object.
Optionally, in this embodiment, when the target object is detected to enter the target scene, a voice signal sent by the target object is collected, voice analysis is performed on the voice signal to obtain an emotion feature carried in the voice signal, an expression signal exhibited by the target object is collected, facial expression analysis is performed on the expression signal to obtain an emotion feature carried in the expression signal, and feature fusion is performed on the emotion feature carried in the voice signal and the emotion feature carried in the expression signal to obtain the target emotion feature.
Optionally, in this embodiment, when it is detected that the target object enters the target scene, the clothing color of the target object is identified, for example: the colors of clothes, trousers, skirt, hat, shoes, gloves, socks, accessories, etc. are identified as the target color signals, and the target color features are extracted from the target color signals.
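As an illustration only, the feature-extraction step described above can be sketched in Python as follows. The helper functions, the trivial fusion rule, and the placeholder return values are assumptions introduced for this sketch; the patent does not name any concrete recognition algorithm or library.

from dataclasses import dataclass

@dataclass
class ObjectFeatures:
    emotion: str            # target emotion feature, e.g. "happy" or "sad"
    clothing_rgb: tuple     # target appearance feature: dominant clothing color

def analyse_voice(voice_clip) -> str:
    """Placeholder for speech-based emotion recognition (intelligent voice analysis)."""
    return "neutral"

def analyse_expression(face_image) -> str:
    """Placeholder for facial-expression-based emotion recognition."""
    return "neutral"

def fuse_emotions(a: str, b: str) -> str:
    """Trivial fusion rule used only for this sketch: prefer a non-neutral result."""
    return a if a != "neutral" else b

def dominant_clothing_color(body_image) -> tuple:
    """Placeholder for clothing-color recognition; returns an RGB triple."""
    return (128, 128, 128)

def extract_features(voice_clip, face_image, body_image) -> ObjectFeatures:
    emotion = fuse_emotions(analyse_voice(voice_clip), analyse_expression(face_image))
    return ObjectFeatures(emotion=emotion,
                          clothing_rgb=dominant_clothing_color(body_image))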
In the technical solution provided in step S206, the automatic identification and matching from the target object feature to the target scene feature may be performed by using, but not limited to, an AI model.
Alternatively, in the present embodiment, the target scene features may include, but are not limited to, colors shown in the scene, lights in the scene, images shown in the scene, and so forth.
In one exemplary embodiment, the target scene may be controlled to exhibit target scene features corresponding to the target object features according to the target object features by, but not limited to: determining corresponding scene characteristics as the target scene characteristics according to the color characteristics included in the target object characteristics; and controlling the target scene to display the target scene characteristics.
Alternatively, in the present embodiment, the target object features may include, but are not limited to, color features, which may be, but are not limited to, emotions and looks derived from the target object.
Optionally, in this embodiment, the target scene features to be displayed in the target scene correspond to the color features of the target object, and the correspondence may either follow or oppose those features. For example: if the color features of the target object are dark tones, dark tones can be matched as the target scene features, or brighter, sunnier tones can be matched as the target scene features to relieve the target object's gloomy mood. If the color features of the target object are bright, sunny tones, similarly bright tones can be matched as the target scene features to cater to the target object's high spirits, or calmer tones can be matched as the target scene features to moderate that excitement to some degree.
In one exemplary embodiment, based on the color features included in the target object features, a corresponding scene feature may be determined as the target scene feature in one of, but not limited to, the following modes:
Mode one may, but is not limited to, include the following steps:
step 11, inputting the color features into a first recognition model corresponding to the target object, wherein the first recognition model is obtained by training an initial recognition model by using historical object features marked with color labels of the target object, and the historical object features are extracted from historical network data of the target object;
and step 12, acquiring the target scene characteristics output by the first recognition model.
Alternatively, in this embodiment, the first recognition model is trained using the historical object features labeled with the color label of the target object, and the historical object features may be, but are not limited to, extracted from the historical network data of the target object. Historical network data may include, but is not limited to, shopping data including target objects, transaction data, chat data, social data, and the like.
Alternatively, in the present embodiment, the first recognition model may be, but is not limited to being, obtained by the following training process: the historical object features are input into an initial recognition model to obtain initial scene features output by the initial recognition model; the model parameters of the initial recognition model are adjusted according to the difference between the initial scene features and the color labels attached to the historical object features, until that difference meets a preset condition; the model parameters that make the difference meet the preset condition are then determined as the model parameters used by the first recognition model, thereby obtaining the first recognition model. Alternatively, the model parameters are adjusted according to that difference until the number of adjustments reaches a preset limit, at which point parameter adjustment stops, an event of stopping model training is prompted, and training resumes after a technician adjusts the model (for example, adjusts its hyper-parameters).
Alternatively, in this embodiment, in mode one, the color features of the target object are input into the first recognition model, and the data output by the first recognition model are obtained directly as the target scene features.
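Purely as an illustrative sketch of the training process just described, and under the assumption of a generic model object exposing predict, loss, and update operations (none of which are specified in the patent), the loop might look like this:

def train_first_model(model, history_features, color_labels,
                      tolerance=0.05, max_rounds=1000):
    """Adjust parameters until the output/label difference satisfies the preset
    condition, or stop and flag the run for manual (hyper-parameter) adjustment."""
    for _ in range(max_rounds):
        predictions = [model.predict(f) for f in history_features]
        difference = sum(model.loss(p, y) for p, y in zip(predictions, color_labels))
        if difference <= tolerance:                      # preset condition met
            return model, True
        model.update(history_features, color_labels)     # adjust model parameters
    return model, False                                  # training stopped; prompt for adjustment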
Mode two may, but is not limited to, include the following steps:
step 21, inputting target emotion features included in the color features into a second recognition model corresponding to the target object, wherein the second recognition model is obtained by training an initial recognition model by using historical emotion features marked with color labels of the target object, and the historical emotion features are extracted from historical network data of the target object;
step 22, obtaining a target color label output by the second recognition model;
and step 23, performing color fusion on the target color label and target appearance features included in the color features to obtain the target scene features.
Optionally, in this embodiment, the second recognition model is obtained by training the initial recognition model using a historical emotion feature labeled with a color label of the target object, and the historical emotion feature is extracted from historical network data of the target object. Historical network data may include, but is not limited to, shopping data including target objects, transaction data, chat data, social data, and the like.
Alternatively, in the present embodiment, the second recognition model may be, but is not limited to being, obtained by the following training process: the historical emotion features are input into an initial recognition model to obtain an initial color label output by the initial recognition model; the model parameters of the initial recognition model are adjusted according to the difference between the initial color label and the color labels attached to the historical emotion features, until that difference meets a preset condition; the model parameters that make the difference meet the preset condition are then determined as the model parameters used by the second recognition model, thereby obtaining the second recognition model. Alternatively, the model parameters are adjusted according to that difference until the number of adjustments reaches a preset limit, at which point parameter adjustment stops, an event of stopping model training is prompted, and training resumes after a technician adjusts the model (for example, adjusts its hyper-parameters).
Alternatively, in the present embodiment, in mode two, the target emotion features of the target object are input into the second recognition model, and the data output by the second recognition model are obtained as the target color label. The target color label is then color-fused with the target appearance features included in the color features, thereby obtaining the target scene features.
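For illustration, mode two can be sketched as follows; the emotion-to-color table and the per-channel averaging fusion rule are assumptions made for this sketch, since the patent does not specify how color fusion is performed.

# Emotion-to-color table and averaging rule are assumptions for this sketch only.
EMOTION_TO_RGB = {"happy": (255, 200, 80), "sad": (70, 90, 140), "neutral": (200, 200, 200)}

def second_model(emotion: str) -> tuple:
    """Stand-in for the trained second recognition model: emotion -> target color label."""
    return EMOTION_TO_RGB.get(emotion, EMOTION_TO_RGB["neutral"])

def fuse_colors(label_rgb: tuple, appearance_rgb: tuple) -> tuple:
    """Simple color fusion: per-channel average of the color label and the appearance color."""
    return tuple((a + b) // 2 for a, b in zip(label_rgb, appearance_rgb))

# Example: a "happy" emotion feature fused with a brown clothing color.
target_scene_color = fuse_colors(second_model("happy"), (90, 60, 30))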
Alternatively, in the present embodiment, the model types of the initial recognition model for training out the first recognition model and the initial recognition model for training out the second recognition model may be the same or different, and the model structures of the two may be the same or different.
In the technical solution provided in step S206, the positions for displaying the target scene features in the target scene may include, but are not limited to: wall surfaces, ceilings, electronic devices, household appliances, furniture, and other home equipment in the target scene.
Alternatively, in the present embodiment, the manner of implementing the target scene feature presentation on the target scene may include, but is not limited to, holographic projection technology, material technology, light control technology, and the like.
In one exemplary embodiment, the target scene presentation target scene features may be controlled, but are not limited to, in the following manner: controlling the target scene to display target colors included in the target scene characteristics to obtain a target color scene; and displaying the target image included by the target scene characteristics on the target color scene.
Alternatively, in the present embodiment, the target scene features may include, but are not limited to, target colors for changing the scene colors, and target images for displaying the scene patterns. The scene color may first be converted to a target color and then the target image may be presented on the target color.
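A minimal sketch of this two-step presentation follows, assuming a hypothetical scene controller with set_background_color and project_image operations; these names are not taken from the patent, which only suggests holographic projection or surface-material technology as possible display means.

def show_scene_features(controller, target_color, target_image):
    """First switch the scene to the target color, then render the target image on it."""
    controller.set_background_color(target_color)   # step 1: obtain the target color scene
    controller.project_image(target_image)          # step 2: display the image on that scene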
In order to better understand the above scene feature control method, the implementation flow of the scene feature control process is described below with reference to an optional embodiment; this flow is not intended to limit the technical solution of the embodiments of the present invention.
In this embodiment, a method for controlling a scene feature is provided, and fig. 3 is a schematic diagram of a method for controlling a scene feature according to an embodiment of the present invention, as shown in fig. 3, specifically including the following steps:
step S301: performing voice analysis on the collected user voice through intelligent voice recognition, and recognizing the current mood, the current language emotion and the like as part of emotion characteristics;
step S302: performing dressing color recognition on the shot user picture through image analysis, and recognizing the current wearing color of the user as an appearance characteristic;
step S303: identifying facial images of the user through facial expression analysis, and identifying the current facial emotion of the user as another part of emotion characteristics;
step S304: through AI customized analysis, various characteristics are customized and output into a set of system colors and images;
step S305: holographic projection technology or surface-material technology is used to display the colors and images in the target scene, thereby re-skinning household appliances, home furnishings, and the like.
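Tying steps S301 to S305 together, an end-to-end sketch could reuse the illustrative helpers from the earlier sketches in this description; all names are assumptions and none come from the patent.

def run_pipeline(sensor_frame, voice_clip, face_image, body_image, controller):
    object_id = detect_target_object(sensor_frame)                    # entry detection
    if object_id is None:
        return
    features = extract_features(voice_clip, face_image, body_image)   # steps S301-S303
    color = fuse_colors(second_model(features.emotion), features.clothing_rgb)  # step S304
    show_scene_features(controller, color, target_image=None)         # step S305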
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method of the various embodiments of the present invention.
Fig. 4 is a block diagram of a scene feature control device according to an embodiment of the present invention; as shown in fig. 4, includes:
a detection module 42, configured to detect that a target object enters a target scene;
an extracting module 44, configured to perform feature extraction on the target object to obtain a target object feature;
the control module 46 is configured to control the target scene to display a target scene feature corresponding to the target object feature according to the target object feature.
Through the above embodiment, it is detected that a target object enters the target scene; feature extraction is performed on the target object to obtain target object features; and the target scene is controlled to display target scene features corresponding to the target object features. In other words, objects entering the target scene are detected; if the target object is detected, its target object features are extracted, and the target scene is controlled to display the corresponding target scene features, so that the target scene matches the features of the target object that has entered it. This technical scheme solves problems in the related art such as the low degree of feature matching between a scene and the objects in the scene, and achieves the technical effect of improving that degree of matching.
In one exemplary embodiment, the control module includes: a first determining unit, configured to determine, according to color features included in the target object feature, a corresponding scene feature as the target scene feature; and the control unit is used for controlling the target scene to display the target scene characteristics.
In one exemplary embodiment, the extraction module includes: the acquisition unit is used for acquiring color signals sent by the target object, wherein the color signals comprise at least one of the following components: mood signals and appearance signals; an extraction unit configured to extract the color feature from the color signal as the target object feature.
In an exemplary embodiment, the first determining unit is configured to: inputting the color features into a first recognition model corresponding to the target object, wherein the first recognition model is obtained by training an initial recognition model by using historical object features marked with the color labels of the target object, and the historical object features are extracted from historical network data of the target object; and acquiring the target scene characteristics output by the first recognition model.
In an exemplary embodiment, the first determining unit is configured to: inputting target emotion characteristics included in the color characteristics into a second recognition model corresponding to the target object, wherein the second recognition model is obtained by training an initial recognition model by using historical emotion characteristics marked with color labels of the target object, and the historical emotion characteristics are extracted from historical network data of the target object; acquiring a target color label output by the second recognition model; and performing color fusion on the target color label and target appearance features included by the color features to obtain the target scene features.
In an exemplary embodiment, the control unit is configured to: controlling the target scene to display target colors included in the target scene characteristics to obtain a target color scene; and displaying the target image included by the target scene characteristics on the target color scene.
In an exemplary embodiment, the detection module includes: the detection unit is used for detecting that the target object enters a target environment; the identification unit is used for identifying an object identifier corresponding to the target object; and the second determining unit is used for determining the area corresponding to the object identifier in the target environment as the target scene.
An embodiment of the present invention also provides a storage medium including a stored program, wherein the program executes the method of any one of the above.
Alternatively, in the present embodiment, the above-described storage medium may be configured to store program code for performing the steps of:
s1, detecting that a target object enters a target scene;
s2, extracting features of the target object to obtain target object features;
and S3, controlling the target scene to display target scene characteristics corresponding to the target object characteristics according to the target object characteristics.
An embodiment of the invention also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
s1, detecting that a target object enters a target scene;
s2, extracting features of the target object to obtain target object features;
and S3, controlling the target scene to display target scene characteristics corresponding to the target object characteristics according to the target object characteristics.
Alternatively, in the present embodiment, the storage medium may include, but is not limited to: a U-disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Alternatively, specific examples in this embodiment may refer to examples described in the foregoing embodiments and optional implementations, and this embodiment is not described herein.
It will be appreciated by those skilled in the art that the modules or steps of the invention described above may be implemented in a general purpose computing device, they may be concentrated on a single computing device, or distributed across a network of computing devices, they may alternatively be implemented in program code executable by computing devices, so that they may be stored in a memory device for execution by computing devices, and in some cases, the steps shown or described may be performed in a different order than that shown or described, or they may be separately fabricated into individual integrated circuit modules, or multiple modules or steps within them may be fabricated into a single integrated circuit module for implementation. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. A method for controlling scene features, comprising:
detecting that a target object enters a target scene;
extracting the characteristics of the target object to obtain the characteristics of the target object;
controlling the target scene to display target scene characteristics corresponding to the target object characteristics according to the target object characteristics;
wherein, according to the target object feature, the target scene is controlled to display the target scene feature corresponding to the target object feature, including:
determining corresponding scene characteristics as the target scene characteristics according to the color characteristics included in the target object characteristics;
controlling the target scene to display the characteristics of the target scene;
wherein determining, according to the color features included in the target object feature, a corresponding scene feature as the target scene feature includes:
inputting target emotion characteristics included in the color characteristics into a second recognition model corresponding to the target object, wherein the second recognition model is obtained by training an initial recognition model by using historical emotion characteristics marked with color labels of the target object, and the historical emotion characteristics are extracted from historical network data of the target object;
acquiring a target color label output by the second recognition model;
and performing color fusion on the target color label and target appearance features included by the color features to obtain the target scene features.
2. The method for controlling scene features according to claim 1, wherein the feature extraction of the target object to obtain the target object feature comprises:
collecting color signals emitted by the target object, wherein the color signals comprise at least one of the following: mood signals and appearance signals;
the color feature is extracted from the color signal as the target object feature.
3. The method of claim 1, wherein determining a corresponding scene feature as the target scene feature based on the color features included in the target object feature, comprises:
inputting the color features into a first recognition model corresponding to the target object, wherein the first recognition model is obtained by training an initial recognition model by using historical object features marked with the color labels of the target object, and the historical object features are extracted from historical network data of the target object;
and acquiring the target scene characteristics output by the first recognition model.
4. The method for controlling a scene feature according to claim 1, wherein controlling the target scene to exhibit the target scene feature comprises:
controlling the target scene to display target colors included in the target scene characteristics to obtain a target color scene;
and displaying the target image included by the target scene characteristics on the target color scene.
5. The method of controlling scene characteristics according to any one of claims 1 to 4, characterized in that detecting entry of the target object into the target scene comprises:
detecting that the target object enters a target environment;
identifying an object identifier corresponding to the target object;
and determining an area corresponding to the object identifier in the target environment as the target scene.
6. A scene feature control device, comprising:
the detection module is used for detecting that the target object enters the target scene;
the extraction module is used for extracting the characteristics of the target object to obtain the characteristics of the target object; the control module is used for controlling the target scene to display target scene characteristics corresponding to the target object characteristics according to the target object characteristics;
wherein, the control module includes: a first determining unit, configured to determine, according to color features included in the target object feature, a corresponding scene feature as the target scene feature;
the control unit is used for controlling the target scene to display the target scene characteristics;
wherein the first determining unit is configured to: inputting target emotion characteristics included in the color characteristics into a second recognition model corresponding to the target object, wherein the second recognition model is obtained by training an initial recognition model by using historical emotion characteristics marked with color labels of the target object, and the historical emotion characteristics are extracted from historical network data of the target object; acquiring a target color label output by the second recognition model; and performing color fusion on the target color label and target appearance features included by the color features to obtain the target scene features.
7. A computer readable storage medium, characterized in that the computer readable storage medium comprises a stored program, wherein the program when run performs the method of any of the preceding claims 1 to 5.
8. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method according to any of the claims 1 to 5 by means of the computer program.
CN202110681521.8A 2021-06-18 2021-06-18 Scene characteristic control method and device, storage medium and electronic device Active CN113569634B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110681521.8A CN113569634B (en) 2021-06-18 2021-06-18 Scene characteristic control method and device, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110681521.8A CN113569634B (en) 2021-06-18 2021-06-18 Scene characteristic control method and device, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN113569634A CN113569634A (en) 2021-10-29
CN113569634B (en) 2024-03-26

Family

ID=78162323

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110681521.8A Active CN113569634B (en) 2021-06-18 2021-06-18 Scene characteristic control method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN113569634B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018023515A1 (en) * 2016-08-04 2018-02-08 易晓阳 Gesture and emotion recognition home control system
CN107797459A (en) * 2017-09-15 2018-03-13 珠海格力电器股份有限公司 Control method, device, storage medium and the processor of terminal device
CN107853885A (en) * 2017-11-08 2018-03-30 邓鹏 An intelligent wardrobe based on the itinerary, health status, and mood of a business traveller
CN108115695A (en) * 2016-11-28 2018-06-05 沈阳新松机器人自动化股份有限公司 A kind of emotional color expression system and robot
CN110671795A (en) * 2019-11-29 2020-01-10 北方工业大学 Livable environment system based on artificial intelligence and use method thereof
CN111447124A (en) * 2020-04-02 2020-07-24 张瑞华 Intelligent household control method and intelligent control equipment based on biological feature recognition
CN111741116A (en) * 2020-06-28 2020-10-02 海尔优家智能科技(北京)有限公司 Emotion interaction method and device, storage medium and electronic device
CN112764352A (en) * 2020-12-21 2021-05-07 深圳创维-Rgb电子有限公司 Household environment adjusting method and device, server and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018218486A1 (en) * 2017-05-31 2018-12-06 Beijing Didi Infinity Technology And Development Co., Ltd. Devices and methods for recognizing driving behavior based on movement data

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018023515A1 (en) * 2016-08-04 2018-02-08 易晓阳 Gesture and emotion recognition home control system
CN108115695A (en) * 2016-11-28 2018-06-05 沈阳新松机器人自动化股份有限公司 A kind of emotional color expression system and robot
CN107797459A (en) * 2017-09-15 2018-03-13 珠海格力电器股份有限公司 Control method, device, storage medium and the processor of terminal device
CN107853885A (en) * 2017-11-08 2018-03-30 邓鹏 An intelligent wardrobe based on the itinerary, health status, and mood of a business traveller
CN110671795A (en) * 2019-11-29 2020-01-10 北方工业大学 Livable environment system based on artificial intelligence and use method thereof
CN111447124A (en) * 2020-04-02 2020-07-24 张瑞华 Intelligent household control method and intelligent control equipment based on biological feature recognition
CN111741116A (en) * 2020-06-28 2020-10-02 海尔优家智能科技(北京)有限公司 Emotion interaction method and device, storage medium and electronic device
CN112764352A (en) * 2020-12-21 2021-05-07 深圳创维-Rgb电子有限公司 Household environment adjusting method and device, server and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A Review on Human-Centered IoT-Connected Smart Labels for the Industry 4.0; TIAGO M et al.; Special Section on Human-Centered Smart Systems and Technologies; 25939-25957 *
Smart home comfort measurement and control system based on a sensor network; Wang Zhong; China Masters' Theses Full-text Database, Information Science and Technology Series, No. 10; I140-1132 *
Research on smart home interaction design based on user scenarios; Wu Yu; China Masters' Theses Full-text Database, Engineering Science and Technology II Series, No. 06; C038-945 *

Also Published As

Publication number Publication date
CN113569634A (en) 2021-10-29

Similar Documents

Publication Publication Date Title
CN104582187A (en) Recording and lamplight control system and method based on face recognition and facial expression recognition
CN110535732A (en) A kind of apparatus control method, device, electronic equipment and storage medium
CN105843051A (en) Smart home system, control device and control method of smart home
CN108681390B (en) Information interaction method and device, storage medium and electronic device
CN105629762B (en) The control device and method of smart home
CN108917113A (en) Assistant voice control method, device and air-conditioning
CN109377995B (en) Method and device for controlling equipment
CN107330418B (en) Robot system
CN105912632A (en) Device service recommending method and device
CN108279777B (en) Brain wave control method and related equipment
CN109343481B (en) Method and device for controlling device
CN113569634B (en) Scene characteristic control method and device, storage medium and electronic device
CN111158258A (en) Environment monitoring method and system
CN104881633A (en) Color blindness mode starting method and intelligent glasses
CN111240220A (en) Equipment control method and device
CN113160475A (en) Access control method, device, equipment and computer readable storage medium
CN108734082A (en) Method for building up, device, equipment and the storage medium of correspondence
CN105868606A (en) Intelligent terminal control device and method
CN115171153A (en) Object type determination method and device, storage medium and electronic device
CN106056063A (en) Recognition and control system of robot
CN115481284A (en) Cosmetic method and device based on cosmetic box, storage medium and electronic device
CN111007806B (en) Smart home control method and device
CN108724203A (en) A kind of exchange method and device
CN110824930B (en) Control method, device and system of household appliance
CN106407421A (en) A dress-up matching evaluation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant