CN117706954B - Method and device for generating a scene, storage medium and electronic device

Info

Publication number
CN117706954B
Authority
CN
China
Prior art keywords
scene
data
user
training
model
Prior art date
Legal status
Active
Application number
CN202410167006.1A
Other languages
Chinese (zh)
Other versions
CN117706954A (en
Inventor
田云龙
赵乾
牛丽
杜永杰
郭义合
徐静
刘朝振
窦方正
Current Assignee
Qingdao Haier Technology Co Ltd
Qingdao Haier Intelligent Home Appliance Technology Co Ltd
Haier Uplus Intelligent Technology Beijing Co Ltd
Original Assignee
Qingdao Haier Technology Co Ltd
Qingdao Haier Intelligent Home Appliance Technology Co Ltd
Haier Uplus Intelligent Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Qingdao Haier Technology Co Ltd, Qingdao Haier Intelligent Home Appliance Technology Co Ltd, Haier Uplus Intelligent Technology Beijing Co Ltd filed Critical Qingdao Haier Technology Co Ltd
Priority to CN202410167006.1A
Publication of CN117706954A
Application granted
Publication of CN117706954B

Classifications

    • G05B15/02: Systems controlled by a computer; electric
    • G05B19/418: Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
    • G05B2219/2642: Domotique, domestic, home control, automation, smart house
    • Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]


Abstract

The application discloses a method and device for generating a scene, a storage medium, and an electronic device, relating to the technical field of smart homes. The method for generating a scene comprises the following steps: acquiring a user's requirement; inputting the requirement into a first scene generation model to obtain a first scene scheme, wherein the training data types of the first scene generation model include user data, device data, environment data, and spatial data; and controlling a target device according to the first scene scheme so that the target device generates the scene. Because the first scene generation model is trained on multiple kinds of data, such as user data, device data, environment data, and spatial data, the trained model can combine factors such as the user, devices, environment, and space when generating a scene. The generated scene is therefore closer to the user's actual needs, which improves the user experience.

Description

Method and device for generating a scene, storage medium and electronic device
Technical Field
The application relates to the technical field of smart homes, and in particular to a method and device for generating a scene, a storage medium, and an electronic device.
Background
At present, with the development of information technology and the Internet, smart home products such as intelligent lighting, intelligent security systems, and intelligent entertainment devices are becoming increasingly abundant. These devices connect over a network and collect data, providing many convenient functions for the user. To make full use of these devices, however, the user needs to set up a series of complex scene modes, such as an away-from-home mode, a return-home mode, and a sleep mode. This scene setting process is time consuming and complex, so there is a need to improve the intelligence of scene generation.
In order to generate scenes more intelligently, the related art discloses a scene generation method based on a generative large model, which comprises the following steps: identifying interaction data of a target object to obtain an identification result, wherein the identification result at least comprises a control instruction for controlling an intelligent device; inputting the control instruction and a scene classification template into the generative large model to obtain the scene type of a target interaction scene output by the model, wherein the scene classification template at least comprises the correspondence between the scene types of historical interaction scenes and the instruction format of the control instruction; and inputting a pre-established scene generation template corresponding to the scene type, together with the control instruction, into the generative large model, and generating the target interaction scene according to the scene script output by the model.
In the process of implementing the embodiments of the present disclosure, at least the following problem was found in the related art:
Although the related art can generate a scene through a generative large model, the generated scene may differ significantly from the user's needs, resulting in a poor user experience.
It should be noted that the information disclosed in the above background section is only intended to enhance the understanding of the background of the application, and may therefore include information that does not constitute prior art already known to those of ordinary skill in the art.
Disclosure of Invention
The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview, and is intended to neither identify key/critical elements nor delineate the scope of such embodiments, but is intended as a prelude to the more detailed description that follows.
The embodiments of the present disclosure provide a method and device for generating a scene, a storage medium, and an electronic device, which enable the generated scene to better meet the user's requirements.
In some embodiments, the method comprises: acquiring a user's requirement; inputting the requirement into a first scene generation model to obtain a first scene scheme, wherein the training data types of the first scene generation model include user data, device data, environment data, and spatial data; and controlling a target device according to the first scene scheme.
Optionally, after controlling the target device according to the first scene scheme, the method further includes: acquiring scene data; updating the first scene scheme according to the scene data to obtain a second scene scheme; and controlling the target device according to the second scene scheme.
Optionally, before acquiring the user's requirement, the method further includes: acquiring a scene vector library; and training the first scene generation model according to the scene vector library; wherein the first scene generation model comprises some or all of a plurality of lightweight models.
Optionally, obtaining the scene vector library includes: constructing a hierarchical knowledge graph; the knowledge in the hierarchical knowledge graph is divided into a plurality of different hierarchies according to the degree of correlation and the importance level; acquiring a plurality of first scene vectors according to the hierarchical knowledge graph; training a plurality of first scene vectors to obtain a scene vector library.
Optionally, acquiring the plurality of first scene vectors according to the hierarchical knowledge-graph includes: acquiring user data, environment data and space data; vectorizing the user data, the environment data, the space data and the equipment data in the hierarchical knowledge graph to obtain a plurality of groups of vector data; and extracting single vector data in each group of vector data and combining the single vector data to obtain a plurality of first scene vectors.
Optionally, constructing a hierarchical knowledge graph includes: acquiring a knowledge base in the intelligent family field; constructing a primary knowledge graph according to the knowledge base; training the primary knowledge graph to obtain a hierarchical knowledge graph.
Optionally, training the primary knowledge-graph includes: clustering the primary knowledge graph; and grading the clustered primary knowledge graph.
Optionally, after controlling the target device according to the first scene scheme, the method further includes: acquiring the user's satisfaction with the first scene scheme; and updating the first scene generation model according to the user's satisfaction.
Optionally, updating the first scene generation model according to the user's satisfaction includes: evaluating the first scene scheme according to the user's satisfaction to obtain an evaluation result, wherein the evaluation result comprises a positive evaluation or a negative evaluation; and inputting the evaluation result into the knowledge base of the smart home domain to update the first scene generation model, or inputting the evaluation result into a conditional generative adversarial network to update the first scene generation model.
Optionally, training the plurality of first scene vectors to obtain the scene vector library includes: training the plurality of first scene vectors to obtain an initial scene vector library; training the initial scene vector library with a conditional generative adversarial network to obtain second scene vectors; and adding the second scene vectors to the initial scene vector library to obtain the scene vector library.
Optionally, acquiring the user's requirement includes: receiving request data of the user; adding background information to the request data, wherein the background information includes user data, device data, environment data, and spatial data; and determining the user's requirement according to the request data with the added background information.
Optionally, determining the user's requirement according to the request data with the added background information includes: processing the request data with the added background information to obtain fused request data; and transmitting the fused request data to a front-end classifier to obtain the user's requirement.
Optionally, adding background information to the request data includes: collecting the background information, wherein the background information includes user data, device data, environment data, and spatial data; and combining the request data with the background information.
Optionally, processing the request data with the added background information includes: performing domain vectorization processing on the request data with the added background information.
In some embodiments, the apparatus for scene generation comprises: a user interaction module configured to acquire the user's requirement and input it into the first scene generation model; a model application module comprising the first scene generation model and configured to output a first scene scheme upon receiving the user's requirement, wherein the training data types of the first scene generation model include user data, device data, environment data, and spatial data; a control module configured to control the target device according to the first scene scheme; and a model training module configured to train the first scene generation model.
In some embodiments, the computer-readable storage medium includes a stored program, wherein the program when run performs the method for scene generation described above.
In some embodiments, the electronic device comprises a memory in which a computer program is stored and a processor arranged to perform the above-described method for scene generation by means of the computer program.
The method for generating the scene, the storage medium and the electronic device provided by the embodiment of the disclosure can realize the following technical effects:
The embodiments of the present disclosure train the first scene generation model with multiple kinds of data, such as user data, device data, environment data, and spatial data, so that the model can better understand the entities in the home environment and the relations among them. The first scene generation model can therefore combine factors such as the user, devices, environment, and space when generating a scene, and the generated scene is closer to the user's needs, which improves the user experience.
The foregoing general description and the following description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that a person skilled in the art can obtain other drawings from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a hardware environment for a method of scene generation according to an embodiment of the present disclosure;
FIG. 2 is a software system schematic diagram of a method for scene generation according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a method for scene generation according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of another method for scene generation according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a method for constructing a hierarchical knowledge-graph, in accordance with an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of another method for scene generation according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of an apparatus for scene generation according to an embodiment of the disclosure;
Fig. 8 is a schematic diagram of an electronic device for a method of scene generation according to an embodiment of the disclosure.
Detailed Description
In order that those skilled in the art will better understand the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the present application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It is to be understood that data so used may be interchanged where appropriate, so that the embodiments of the application described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, so that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Currently, the existing smart home scene setting mode mainly depends on the user's manual setting. First, the setup process consumes a lot of time: the user must set the state and parameters of each device one by one, and this process must be repeated for different scenarios. Second, the setting is difficult: the user needs certain device knowledge and operation skills to complete it accurately. Finally, because home environments and user demands differ, manually set scenes often cannot accurately meet a user's personalized needs, and their flexibility is insufficient.
According to one aspect of an embodiment of the present disclosure, a method for scene generation is provided. The method is widely applicable to whole-house intelligent digital control applications such as Smart Home, smart home device ecosystems, and Intelligence House ecosystems. Optionally, in the embodiment of the present disclosure, the method may be applied to a hardware environment composed of the smart home device 01 and the control center device 02 shown in FIG. 1. As shown in FIG. 1, the control center device 02 is connected to the smart home device 01 through a network and can be used to provide services (such as application services) for a terminal or a client installed on the terminal. A database can be set up on the server, or independent of the server, to provide data storage services for the control center device 02, and cloud computing and/or edge computing services can be configured on the server, or independent of the server, to provide data computing services for the control center device 02.
The network may include, but is not limited to, at least one of: a wired network, a wireless network. The wired network may include, but is not limited to, at least one of: a wide area network, a metropolitan area network, a local area network; the wireless network may include, but is not limited to, at least one of: Wi-Fi (Wireless Fidelity), Bluetooth. The smart home device 01 may be, but is not limited to, a PC, a mobile phone, a tablet computer, a smart air conditioner, a smart range hood, a smart refrigerator, a smart oven, a smart cooktop, a smart washing machine, a smart water heater, a smart washing device, a smart dishwasher, a smart projection device, a smart television, a smart clothes hanger, a smart curtain, a smart video device, a smart socket, a smart speaker, a smart fresh-air device, smart kitchen and bathroom devices, a smart bathroom device, a smart sweeping robot, a smart window-cleaning robot, a smart mopping robot, a smart air purification device, a smart steam oven, a smart microwave oven, a smart kitchen appliance, a smart purifier, a smart water dispenser, a smart door lock, and the like.
The control center device 02 may be, but is not limited to, a cloud server, a central controller, a smart home gateway, or the like.
In connection with the hardware environment shown in fig. 1, a software system for scene generation is provided in an embodiment of the present disclosure. The control center device 02 is provided with a user interaction module 03, a model application module 04, a control module 05 and a model training module 06, as shown in fig. 2.
The user interaction module 03 is configured to obtain a user's requirements and input the user's requirements into the first scene generation model.
Optionally, the user interaction module 03 includes a receiving module, an adding module, a processing module, and a classifying module. The receiving module receives the user's request data. The adding module adds background information to the request data, wherein the background information includes user data, device data, environment data, and spatial data. The processing module processes the request data with the added background information to obtain fused request data. The classifying module comprises a classifier to which the fused request data are transmitted; the classifier is used to classify the user's request.
After the user sends request data, the user interaction module 03 can acquire the background information of the current smart home. The background information contains much scene-related information; adding it to the request data enriches the content of the request data so that it carries more effective information for the classifier to understand and classify. The processing module can then perform domain vectorization on the request data with the added background information and input the result into the classifier for classification. The processed data are thus easier for the classifier to understand, the classification results are more accurate, and the accuracy of the classification model is ultimately improved.
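A minimal sketch of this preprocessing flow in Python, assuming a hypothetical embed() text encoder and illustrative field names (the patent does not specify a data schema or an embedding method):

```python
from dataclasses import dataclass, field


@dataclass
class Request:
    text: str                      # raw user request, e.g. "I want to watch a movie"
    background: dict = field(default_factory=dict)


def add_background(request: Request, user, devices, environment, space) -> Request:
    """Attach the four background data types to the raw request."""
    request.background = {
        "user": user,              # e.g. {"id": "u1", "habits": [...]}
        "devices": devices,        # e.g. [{"name": "living room light", "state": "on"}]
        "environment": environment,
        "space": space,
    }
    return request


def fuse(request: Request, embed) -> list:
    """Domain vectorization: embed the request text and the background
    information, then fuse the two vectors (here by plain concatenation).
    The fused vector is what the front-end classifier receives."""
    text_vec = embed(request.text)
    background_vec = embed(str(sorted(request.background.items())))
    return list(text_vec) + list(background_vec)
```

Concatenation is only one possible fusion rule; the point is that the classifier sees both the request itself and its scene context.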
The model application module 04 comprises a first scene generation model configured to output a first scene scheme upon receiving a user's demand; wherein the training data type of the first scene generation model includes user data, device data, environment data, and spatial data.
Training data of the first scene generation model is stored in the background information base 09. The background information base 09 includes user data, device data, environment data, and space data.
The user data include the user's basic attributes, behavior habits, preferences, commonly used devices, and the environment in which the user is located.
User data are extremely important in the smart home environment; they help the system understand and predict the user's behavior, making the service more personalized and efficient. Basic attributes include the user's gender, age, occupation, and so on. These tags help the system understand the user's general behavioral tendencies; for example, a young white-collar worker may be more accustomed to controlling home devices with a smartphone. Behavior habits comprise records of the user's past use of smart home devices, which together describe a comprehensive behavior pattern. For example, if a user often uses a coffee machine at 7 a.m., the use time and device identifier are recorded as part of the user's behavior-habit data; the system can learn this and preheat the coffee machine before the usual use time. Preferences include records of how the user controls devices and configures settings, which the system can also learn. For example, a user who likes to watch movies in the evening may often dim the living room lights and turn on home theater mode. Commonly used devices include records of the devices the user uses most. For example, if a user often cooks in the kitchen and prefers listening to audiobooks, the speaker is determined to be one of the user's commonly used devices. The environment in which the user is located includes activity patterns in different spatial environments; for example, a user who often works in the study exhibits a different activity pattern from a user who rests in the living room.
The device data includes: device basic information, device status, device capabilities, device location.
Devices are a key component of the smart home environment. The basic information of a device includes its model, brand, and so on; this information helps identify the device type and understand its basic functions. The device state includes the device's switch state and current operating state. Such state information is critical for understanding the device's current situation and predicting the operations that may need to be performed. For example, knowing that the air conditioner is running and set to 22 degrees Celsius, the system may need to raise the set temperature if the user feels too cold. Device capabilities include the functions the device can perform; for example, a smart light can adjust brightness and color, and a smart speaker can play music, set timers, and so on. The system needs to know a device's characteristics and how the device responds to particular commands. Each device in a home environment occupies a particular spatial location, and this location information influences the understanding of user instructions and the subsequent operating strategy. For example, if a voice command is known to come from the bedroom and the command content is "turn off the light," then the light in the bedroom should be turned off.
The environmental data include: the somatosensory environment, indoor air quality, time and season, room occupancy, and special conditions.
The environment has a decisive influence on how the system understands the user's needs and makes accurate decisions. The somatosensory environment comprises environmental parameters such as indoor temperature, humidity, illumination, and noise. For example, the indoor temperature may affect a thermostatic device, changing indoor lighting conditions may change the user's demand for smart lights, and poor indoor air quality may require the user to turn on an air purifier or open a roller blind. The time and season include the current point in time, date, season, and so on; for example, nightfall requires the lights to be turned on, while in winter it may be necessary to raise the temperature of a thermostatic device. Room occupancy includes whether someone is in the room and how people move. For example, the lights in an empty room may be turned off to save energy, while lighting may need to be turned on when someone enters. Special conditions include special events that also affect the environment, such as whether a party is in progress or a guest is arriving.
Spatial data includes room layout, room functions, device distribution, and spatial dimensions.
Spatial data refer to the physical attributes and features of the environment in which the smart home system is located. The room layout includes the positional relationships of the rooms in the home and the floor-plan design. Such information affects how devices are distributed and installed, as well as how the user moves around and uses devices at home. Each room has specific functions, e.g. bedrooms for rest, studies for work or study, living rooms for social or recreational activities; knowing the function of each room helps the system understand what activities a user may be doing at a particular location and what they may need from the devices. The device distribution indicates which smart home devices each room contains, and the relative positions between devices are also important for accurate control. Spatial dimensions include the size, angle, and height of a space or site. For example, when controlling a curtain, the travel required for opening and closing must be considered according to the width of the window.
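The four data types above can be pictured as records in the background information base 09. A minimal sketch with illustrative field names (the patent does not specify a schema):

```python
# Illustrative records for the background information base 09; all field
# names and values are assumptions chosen to mirror the description above.
user_data = {
    "basic_attributes": {"gender": "female", "age": 29, "occupation": "white collar"},
    "behavior_habits": [{"device": "coffee_machine", "usual_time": "07:00"}],
    "preferences": [{"scene": "movie_night",
                     "actions": ["dim living room lights", "home theater mode"]}],
    "common_devices": ["smart_speaker"],
    "environments": ["kitchen", "study"],
}

device_data = {
    "basic_info": {"model": "AC-22X", "brand": "ExampleBrand"},
    "state": {"power": "on", "target_temp_c": 22},
    "capabilities": ["set_temperature", "set_fan_speed"],
    "location": "bedroom",
}

environment_data = {
    "somatosensory": {"temp_c": 26.5, "humidity": 0.55, "lux": 120, "noise_db": 38},
    "air_quality": {"pm25": 35},
    "time": {"clock": "21:30", "season": "winter"},
    "occupancy": {"living_room": True},
    "special": {"party": False},
}

space_data = {
    "layout": {"bedroom": "north", "living_room": "south"},
    "functions": {"study": "work", "bedroom": "rest"},
    "device_distribution": {"living_room": ["tv", "smart_speaker"]},
    "dimensions": {"living_room_window_width_m": 2.4},
}
```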
A control module 05 configured to control the target device according to the first scenario scheme.
The model training module 06 is configured to train the first scene generation model. The model training module 06 includes: a knowledge base 061 of the smart home domain, a hierarchical knowledge graph 062, a scene vector library 063, and a reinforcement learning model 07.
The knowledge base 061 of the smart home domain stores structured and unstructured data, including but not limited to: product design data, scene designs, product planning, enterprise standardization data, and product function lists. Such data may be obtained from product specifications, customer service work orders, after-sales orders, marketing orders, or e-commerce reviews.
The hierarchical knowledge graph 062 is a knowledge graph that has undergone deep clustering and grading, from which a knowledge structure meeting the relevant requirements can be constructed. As shown in FIG. 5, the knowledge base 061 of the smart home domain provides data support for the primary knowledge graph, so that the hierarchical knowledge graph 062 obtained from the primary knowledge graph includes multi-level attributes of the smart home domain. The knowledge in the hierarchical knowledge graph 062 is divided into different levels according to its degree of correlation and importance, which makes it easier for the scene generation model to understand how each smart device is used and how the devices relate to one another. The hierarchical knowledge graph can serve as effective input for training the lightweight models and provides the necessary knowledge system for the model. By constructing the hierarchical knowledge graph 062, new dimensions can be added on top of the original full knowledge graph, so that the graph better serves specific business scenarios and its value in practical applications is improved.
The scene vector library 063 is composed of a large number of scene vectors and contains a large amount of data for scene generation. The first scene generation model is trained with the scene vector library 063 so that it can draw on this data when generating scenes, ultimately making the generated scenes better meet the user's needs.
The reinforcement learning model 07 is used to update the scene vector library and can expand the scene vectors in the scene vector library 063. The reinforcement learning model 07 may include one or more of a conditional generative adversarial network (conditional GAN), a Markov decision process, or a Q-learning algorithm. When the library is optimized with a conditional GAN, the authenticity of the scene vectors in the scene vector library 063 can also be tested, further optimizing the library.
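A minimal conditional GAN sketch in PyTorch for expanding the scene vector library; the dimensions, architecture, and optimizer choices are assumptions, since the patent names the technique but not an implementation:

```python
import torch
import torch.nn as nn

SCENE_DIM, COND_DIM, NOISE_DIM = 128, 16, 32  # assumed sizes


class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + COND_DIM, 256), nn.ReLU(),
            nn.Linear(256, SCENE_DIM),
        )

    def forward(self, noise, cond):
        return self.net(torch.cat([noise, cond], dim=1))


class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SCENE_DIM + COND_DIM, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )

    def forward(self, vec, cond):
        return self.net(torch.cat([vec, cond], dim=1))


def train_step(gen, disc, real_vecs, cond, g_opt, d_opt):
    """One cGAN step: the discriminator learns real vs. generated scene
    vectors under a condition; the generator learns to pass that test."""
    bce = nn.BCEWithLogitsLoss()
    batch = real_vecs.size(0)
    noise = torch.randn(batch, NOISE_DIM)
    fake = gen(noise, cond)

    d_opt.zero_grad()
    d_loss = (bce(disc(real_vecs, cond), torch.ones(batch, 1)) +
              bce(disc(fake.detach(), cond), torch.zeros(batch, 1)))
    d_loss.backward()
    d_opt.step()

    g_opt.zero_grad()
    g_loss = bce(disc(fake, cond), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()
    return fake  # candidate second scene vectors for the library
```

The condition vector could encode, for example, the user and room, so that generated scene vectors stay consistent with a given context; the discriminator score also serves as the authenticity test mentioned above.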
The control center device 02 is further provided with a user feedback module 08 configured to collect and process the user's feedback information to update the first scene generation model.
As shown in FIG. 2, the system can train the first scene generation model in combination with the knowledge base 061 of the smart home domain. In the process of generating a scene through the first scene generation model, various data such as user data, device data, environment data, and spatial data are combined, and the first scene generation model can be updated with the user's feedback, ultimately making the generated scene better meet the user's needs.
The first scene generation model can control the devices in the smart home according to the user's requirement and generate the scene corresponding to that requirement. The user interaction module 03 is configured to acquire the user's requirement and input it into the scene generation model. The knowledge base 061 of the smart home domain, the hierarchical knowledge graph 062, the scene vector library 063, and the reinforcement learning model 07 are all used to train the first scene generation model.
In conjunction with the hardware environment shown in fig. 1 and the software system shown in fig. 2, a method for scene generation is provided in an embodiment of the present disclosure. The method is as shown in fig. 3, and comprises the following steps:
S001, the control center equipment acquires the requirements of the user.
S002, the control center equipment inputs the requirements of the user into the first scene generation model to obtain a first scene scheme. Wherein the training data type of the first scene generation model includes user data, device data, environment data, and spatial data.
S003, the control center device controls the target device according to the first scene scheme.
In the embodiment of the present disclosure, the control center device generates a first scene scheme through the first scene generation model and controls the target device according to the first scene scheme. The target device executes the first scene scheme and generates the scene corresponding to the user's requirement. In the related art, a scene is generated from a control instruction and a scene classification template, so the generated scene may differ significantly from the user's needs and fail to meet them. The embodiment of the present disclosure instead trains the first scene generation model with user, device, environment, and spatial data, so that the trained model can generate scenes by combining factors such as the user, devices, environment, and space. In this way, the generated scene is closer to the user's needs, which improves the user experience.
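A minimal sketch of the S001-S003 flow, assuming a hypothetical control center interface:

```python
def generate_scene(control_center, user_request: str):
    requirement = control_center.acquire_requirement(user_request)  # S001
    scheme = control_center.first_scene_model(requirement)          # S002
    for action in scheme.device_actions:                            # S003
        control_center.control(action.device_id, action.command)
    return scheme
```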
Optionally, after controlling the target device according to the first scene scheme, the method further includes: acquiring scene data; updating the first scene scheme according to the scene data to obtain a second scene scheme; and controlling the target device according to the second scene scheme.
As shown in connection with fig. 4, another method for generating a scene provided by an embodiment of the disclosure includes:
s101, the control center equipment acquires the requirements of the user.
S102, the control center equipment inputs the requirements of the user into a first scene generation model to obtain a first scene scheme. Wherein the training data type of the first scene generation model includes user data, device data, environment data, and spatial data.
S103, the control center device controls the target device according to the first scene scheme, so that the target device generates a scene.
S104, the control center equipment acquires scene data.
S105, the control center equipment updates the first scene scheme according to the scene data to obtain a second scene scheme.
S106, controlling the target equipment according to the second scene scheme.
The second scene scheme comprises change data for the target device. The control center device can determine the target device to be controlled and change its operating state according to the change data of the second scene scheme to meet the user's needs.
In the embodiment of the present disclosure, the current scene state can be obtained, and the execution effect of the current scheme and changes in the environment can be assessed. The control center device can update the first scene scheme according to this assessment, so that the second scene scheme better meets the user's needs and improves the user experience. Moreover, the update is completed by the control center device without requiring the user to re-enter the requirement, which simplifies user operation and makes scene generation more intelligent.
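A minimal sketch of such an update step, comparing measured scene data against the first scheme's targets; the field names and tolerance are illustrative assumptions:

```python
def update_scheme(targets, scene_data, tolerance_c=1.0):
    """targets: the first scheme's goals, e.g. [{"device": "ac", "temp_c": 24.0}];
    scene_data: measurements after execution, e.g. {"ac": {"temp_c": 26.2}}.
    Returns the change data that make up the second scene scheme."""
    changes = []
    for target in targets:
        measured = scene_data.get(target["device"])
        if measured is None:
            continue  # no measurement for this device; leave it unchanged
        if abs(measured["temp_c"] - target["temp_c"]) > tolerance_c:
            changes.append({"device": target["device"],
                            "command": {"set_temp_c": target["temp_c"]}})
    return changes
```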
Optionally, the scene data includes a temperature after the scene is executed, a humidity after the scene is executed, and an operating state of the device after the scene is executed.
By monitoring the room's temperature, humidity, and the operating states of the devices, the execution effect of the scene can be obtained more accurately. The control center device can update the first scene scheme according to this execution effect, so that the second scene scheme better meets the user's needs and improves the user experience.
Optionally, before obtaining the requirement of the user, the method further includes: and obtaining a scene vector library. And training a first scene generating model according to the scene vector library. Wherein the first scene generation model comprises part or all of the plurality of lightweight models.
The lightweight models are obtained by grading knowledge according to the knowledge tower group theory and training on the graded knowledge. In the knowledge tower group theory, knowledge in the smart home domain is classified according to its relevance and importance and organized into a knowledge system with a clear hierarchy and a compact structure. Knowledge at an appropriate level can then be selected for model training according to different business demands, realizing more accurate and efficient service. Training lightweight models reduces the model size and lowers computing resource consumption and response time on the one hand, and enables more accurate and efficient service through appropriate model selection on the other.
In the embodiment of the present disclosure, the scene vector library is composed of a large number of scene vectors and contains a large amount of data for scene generation. Training the first scene generation model with the scene vector library enables the model to combine this large amount of data when generating scenes, ultimately making the generated scenes better meet the user's needs.
Optionally, obtaining the scene vector library includes: and constructing a hierarchical knowledge graph, wherein the knowledge in the hierarchical knowledge graph is divided into a plurality of different hierarchies according to the degree of correlation and the importance level. And acquiring a plurality of first scene vectors according to the hierarchical knowledge graph. Training a plurality of first scene vectors to obtain a scene vector library.
In the embodiment of the present disclosure, the knowledge graph can understand and describe the entities in the smart home environment and the relations among them. It can visualize elements such as devices and users and their relations, which facilitates obtaining scene vectors. By training the first scene vectors, a scene vector library can be obtained and used to create the first scene generation model, so that the generated scene better meets the user's needs and the user experience is improved.
A scene vector embodies a scene pattern that the scene generation model can generate. For example, a vector for the user "Xiaoming," a vector for the household appliances (smart speaker and air conditioner), a vector for "evening," and a vector for "living room" are combined into one scene vector that comprehensively considers all the elements. The smart home scene can then be described as: "in the evening, adjust the air conditioner to 24 degrees through the smart speaker in the living room to watch the television news."
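A minimal sketch of composing such a first scene vector, assuming a hypothetical embed() encoder and concatenation as the combination rule:

```python
import numpy as np


def compose_scene_vector(embed, user, devices, time_of_day, room):
    """Combine per-element vectors into one scene vector, as in the
    'Xiaoming / smart speaker and air conditioner / evening / living room'
    example above."""
    parts = [
        embed(user),                                   # e.g. "Xiaoming"
        np.mean([embed(d) for d in devices], axis=0),  # appliance vectors, averaged
        embed(time_of_day),                            # e.g. "evening"
        embed(room),                                   # e.g. "living room"
    ]
    return np.concatenate(parts)
```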
Optionally, the first scene vectors are trained through a GCN (Graph Convolutional Network).
A graph convolutional network (GCN) can process node and edge information while maintaining the graph structure and can be used for node classification and link prediction. In the embodiment of the present disclosure, all first scene vectors are regarded as nodes in a graph, the associations among scenes are regarded as edges, and the GCN is then used for training, so that the first scene vectors better capture the complex interactions among the elements. The scene vector library formed from the trained first scene vectors is therefore more realistic, the generated scenes better meet the user's needs, and the user experience is improved.
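A minimal sketch of one GCN propagation layer in the standard form H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W), where A encodes the scene associations (edges) and H holds the first scene vectors as node features; sizes and weights are illustrative:

```python
import numpy as np


def gcn_layer(adjacency: np.ndarray, features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """One graph-convolution step over the scene graph."""
    a_hat = adjacency + np.eye(adjacency.shape[0])           # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))   # normalized degrees
    propagated = d_inv_sqrt @ a_hat @ d_inv_sqrt @ features  # aggregate neighbors
    return np.maximum(propagated @ weights, 0.0)             # linear map + ReLU
```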
Optionally, acquiring the plurality of first scene vectors according to the hierarchical knowledge-graph includes: user data, environment data, and spatial data are acquired. And carrying out vectorization processing on the user data, the environment data, the space data and the equipment data in the hierarchical knowledge graph to obtain a plurality of groups of vector data. And extracting single vector data in each group of vector data and combining the single vector data to obtain a plurality of first scene vectors.
In the embodiment of the present disclosure, the user data, device data, environment data, and spatial data can be combined with one another to train the first scene generation model, so that the generated scene meets the user's needs and the user experience is improved. Single vector data are extracted from each group of vector data and combined into first scene vectors, from which the scene vector library is obtained. This facilitates creating the first scene generation model, further making the generated scene meet the user's needs and improving the user experience.
Optionally, constructing a hierarchical knowledge graph includes: and acquiring a knowledge base in the intelligent family field. And constructing a primary knowledge graph according to the knowledge base. Training the primary knowledge graph to obtain a hierarchical knowledge graph.
In the embodiment of the present disclosure, the knowledge graph can not only capture the multi-faceted information of home appliances but also integrate this information into deep associations, providing valuable decision support for the design, production, and sales of home appliances. The embodiment of the present disclosure constructs the knowledge graph from the knowledge base of the smart home domain and uses it to train the first scene generation model, so that the model incorporates the diverse information of the smart home domain and can make better use of smart home appliances when generating scenes, making the generated scenes better meet users' needs and improving the user experience.
Further, the hierarchical knowledge graph obtained according to the primary knowledge graph contains multi-level attributes of the intelligent family field, so that the scene generation model is convenient to understand the use mode of each intelligent device and the relation between the intelligent devices, and further the intelligent devices can be better utilized to generate scenes which meet the requirements of users.
Optionally, constructing the primary knowledge graph according to the knowledge base includes: performing entity recognition and relation establishment on the structured data in the knowledge base; performing entity recognition and relation extraction on the unstructured data in the knowledge base; and constructing the primary knowledge graph from the entities and relations.
In the embodiment of the present disclosure, structured data are data in a fixed format, for example data in Excel format. Tools such as Python's pandas library can be used to read Excel files and load the data into memory. In data presented in Excel format, each column may represent an entity type, and each row may represent a particular set of entities and the relations between them. The primary knowledge graph can thus be constructed by traversing the Excel file.
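A minimal sketch of extracting (entity, relation, entity) triples from such an Excel file with pandas; the file name and column layout are assumptions:

```python
import pandas as pd


def triples_from_excel(path: str):
    """Traverse an Excel sheet and emit one knowledge-graph edge per row."""
    df = pd.read_excel(path)  # assumed columns: device, relation, target
    triples = []
    for row in df.itertuples(index=False):
        triples.append((row.device, row.relation, row.target))
    return triples


# usage sketch:
# edges = triples_from_excel("smart_home_knowledge.xlsx")
```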
The format of unstructured data is not fixed, for example a product introduction in a product specification. Entities and relations can be obtained through natural language processing techniques and then used to construct the primary knowledge graph.
Optionally, constructing a primary knowledge graph according to the entity and the relationship, including: and according to the entity and the relationship, carrying out entity link and relationship fusion to obtain a primary knowledge graph.
In the embodiment of the present disclosure, entity linking is an important task in the field of natural language processing. The goal of entity linking is to link entities mentioned in text, such as person names, place names, and organization names, with the corresponding entities in the knowledge base. Entity linking allows the semantics of sentences to be understood better and enables text mining and analysis on a richer and more comprehensive knowledge base.
Optionally, before entity identification and relation extraction are performed on unstructured data in the knowledge base, the method further comprises: text cleansing is performed on unstructured data.
Text cleansing removes unwanted characters, fixes formatting, and corrects misspellings, thereby removing invalid information from the unstructured data and improving the accuracy and efficiency of entity recognition and relation extraction.
Optionally, before entity identification and relation extraction are performed on unstructured data in the knowledge base, the method further comprises: and word segmentation is carried out on unstructured data.
Word segmentation splits continuous text into individual words or tokens, improving the accuracy and efficiency of entity recognition and relation extraction.
Optionally, before entity identification and relation extraction are performed on unstructured data in the knowledge base, the method further comprises: and performing stop word removal on unstructured data.
Stop words are words that occur frequently in text but contribute little to its meaning, such as "have." Stop-word removal discards such low-contribution words to reduce noise, improving the accuracy and efficiency of entity recognition and relation extraction.
Optionally, before entity identification and relation extraction are performed on unstructured data in the knowledge base, the method further comprises: and extracting word stems of unstructured data.
Stem extraction unifies different forms of a word into the same form, making the text representation more compact and consistent. This helps improve the performance of tasks such as text classification and information retrieval, and further improves the accuracy and efficiency of entity recognition and relation extraction.
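A minimal combined sketch of the four preprocessing steps on an English example; a real system would use a Chinese segmenter and a domain stop-word list, and the stop-word set and suffix rules here are illustrative:

```python
import re

STOP_WORDS = {"the", "a", "and", "have", "is", "of"}  # illustrative list


def stem(token: str) -> str:
    """Toy suffix-stripping stemmer; a real system would use a
    dictionary-based normalizer."""
    for suffix in ("ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token


def preprocess(text: str) -> list:
    cleaned = re.sub(r"[^\w\s]", " ", text.lower())      # text cleansing
    tokens = cleaned.split()                             # word segmentation
    tokens = [t for t in tokens if t not in STOP_WORDS]  # stop-word removal
    return [stem(t) for t in tokens]                     # stem extraction


# preprocess("The oven supports baking and grilling.")
# -> ['oven', 'support', 'bak', 'grill']
```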
Optionally, the primary knowledge-graph is trained by the GCN.
In the primary knowledge graph, each home appliance and each of its attributes is represented as a node, and the relation between two nodes is represented as an edge connecting them. Each node and edge has corresponding features; for example, node features may include device specifications and device functions, and edge features may represent the type of relation between two nodes. The GCN can process information on nodes and edges while maintaining the graph structure and can be used for node classification and link prediction. In the embodiment of the present disclosure, the GCN can effectively capture and understand the complex topology and node features in the graph, and the primary knowledge graph can be optimized through GCN learning, for example by adding new nodes and edges and updating the features of nodes and edges. In this way, a hierarchical knowledge graph with multi-level attributes of the smart home domain can be obtained.
Optionally, training the primary knowledge-graph includes: and clustering the primary knowledge graph. And grading the clustered primary knowledge graph.
In the embodiment of the present disclosure, clustering the primary knowledge graph gathers related knowledge entities together, making it easier for the control center device to find all information related to a certain topic or concept. For the primary knowledge graph, clustering can hide the complexity of the underlying structure, providing a more concise and intuitive view of the knowledge.
The clustered knowledge graph is then graded: each cluster is ordered or classified mainly according to the importance, relevance, or hierarchical structure of its knowledge entities. Grading provides a layered view from the macroscopic to the microscopic, making the structure of the knowledge graph easier to understand. Important entities or relations can be highlighted by ranking, providing more valuable information, and the required information or sub-fields can be located quickly from the grading result, improving information retrieval efficiency. Further, the grading can be updated over time or according to other dynamic factors to reflect changes in the knowledge graph. Grading also makes the structure and content of the knowledge graph easier to explain, improving the understandability of the knowledge.
Optionally, clustering the primary knowledge graph includes: performing relation prediction through a graph neural network, and adding the predicted relations to the primary knowledge graph.
A graph neural network is a neural network dedicated to processing graph-structured data. Graph convolution is a basic operation in a graph neural network for capturing the relations between nodes. By updating node features in the graph convolution layers, information from neighbor nodes can be aggregated, yielding richer node representations. After graph convolution, each node contains much information about its neighbors, and the relations between nodes can be predicted using prediction tasks. Common prediction tasks include link prediction between nodes and node attribute prediction. In the embodiment of the present disclosure, the relations among nodes in the primary knowledge graph can be predicted through such prediction tasks, and the predicted relations can be added to the primary knowledge graph, enriching the node representations and facilitating the layering that yields the hierarchical knowledge graph.
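A minimal link-prediction sketch over node embeddings (for example, the output of a GCN layer), scoring each candidate edge with a sigmoid of the dot product of its endpoint embeddings; the threshold is an assumption:

```python
import numpy as np


def predict_links(embeddings: np.ndarray, existing_edges: set, threshold: float = 0.8):
    """embeddings: (num_nodes, dim) node representations after graph convolution."""
    predicted = []
    n = embeddings.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if (i, j) in existing_edges:
                continue
            score = 1.0 / (1.0 + np.exp(-embeddings[i] @ embeddings[j]))  # sigmoid
            if score > threshold:
                predicted.append((i, j, score))  # candidate relation to add to the graph
    return predicted
```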
Optionally, ranking the clustered primary knowledge-graph includes ranking according to a learning algorithm.
In the embodiments of the present disclosure, the learning algorithms include machine learning algorithms and deep learning algorithms. A machine learning algorithm can grade directly according to the similarity among knowledge points, with high grading efficiency. A deep learning algorithm can learn the nonlinear relations among knowledge points, making the grading result more accurate and yielding a hierarchical knowledge graph that better meets the user's requirements.
Optionally, grading according to a learning algorithm includes: extracting features of data in the primary knowledge graph to obtain feature vectors; according to the feature vector, acquiring a grading strategy corresponding to the feature vector; and grading the primary knowledge graph according to a grading strategy.
In the embodiment of the present disclosure, feature extraction is performed on the data in the primary knowledge graph; feature vectors can be extracted from the nodes of the primary knowledge graph. A grading strategy can then be formulated based on the extracted feature vectors, for example grading according to the distances between node feature vectors or the similarity of the feature vectors. In this way, the knowledge in the primary knowledge graph can be divided into different levels, and the attributes corresponding to each level can be obtained.
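A minimal sketch of such a grading strategy, clustering node feature vectors with scikit-learn KMeans and assigning levels by an illustrative distance rule:

```python
import numpy as np
from sklearn.cluster import KMeans


def grade_knowledge(feature_vectors: np.ndarray, n_clusters: int = 3):
    """Cluster node feature vectors, then grade clusters by the distance of
    their centroids to the global centroid (closer = higher level here)."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(feature_vectors)
    global_centroid = feature_vectors.mean(axis=0)
    dists = np.linalg.norm(km.cluster_centers_ - global_centroid, axis=1)
    level_of_cluster = np.argsort(np.argsort(dists))  # rank 0 = most central
    return [int(level_of_cluster[c]) for c in km.labels_]
```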
Optionally, ranking the clustered primary knowledge-graph includes ranking by a classifier model.
A classifier model can classify new data quickly and is therefore widely used in scenarios requiring real-time response. Moreover, compared with learning algorithms, the implementation and maintenance costs of a classifier model are relatively low.
Optionally, after grading the clustered primary knowledge graph, the method further includes: and evaluating the grading result, and adjusting the grading strategy according to the evaluation result.
In the embodiment of the present disclosure, because the data in the primary knowledge graph are relatively disordered, the initially formulated grading strategy may cause grading errors and thus affect the grading result. By evaluating the grading result, the grading strategy can be adjusted in time, so that the knowledge in the primary knowledge graph is graded more accurately, further improving the accuracy of the hierarchical knowledge graph.
Optionally, after controlling the target device according to the first scene scheme, the method further includes: acquiring the user's satisfaction with the generated scene, and updating the first scene generation model according to the user's satisfaction.
As shown in connection with fig. 6, another method for generating a scene provided by an embodiment of the disclosure includes:
s201, the control center device obtains the requirements of the user.
S202, the control center equipment inputs the requirements of the user into a first scene generation model to obtain a first scene scheme. Wherein the training data type of the first scene generation model includes user data, device data, environment data, and spatial data.
S203, the control center device controls the target device according to the first scene scheme, so that the target device generates a scene.
S204, the control center device obtains satisfaction degree of the user on the generated scene.
S205, the control center equipment updates the first scene generation model according to the satisfaction degree of the user.
S206, the control center device acquires scene data.
S207, the control center device updates the first scene scheme according to the scene data to obtain a second scene scheme.
S208, the control center device controls the target device according to the second scene scheme.
In the embodiment of the disclosure, user feedback can be collected and processed, and the satisfaction degree of the user with the generated scene can be obtained from the user's feedback information. Through deep understanding and analysis of user satisfaction, the first scene generation model can be corrected and optimized in a timely and accurate manner, thereby improving the user experience.
Optionally, obtaining satisfaction of the user with the generated scene includes: and monitoring the behavior of the user in the generated scene, and obtaining the satisfaction degree of the user.
Optionally, obtaining the satisfaction of the user with the generated scene includes: collecting feedback of the user to obtain the satisfaction of the user. For example, implicit user feedback may be collected: users talk about their smart home experience on social media, or share their experience with friends and family. Such implicit feedback can represent the user's satisfaction with the scene.
Optionally, obtaining the satisfaction of the user with the generated scene includes: obtaining the satisfaction of the user through an A/B test. Illustratively, user satisfaction is tested by creating two or more versions of the control scenario. For example, different control strategies may be tested for the same device to determine which is more popular with users.
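A minimal sketch of such an A/B comparison is given below, assuming satisfaction is logged as a binary satisfied/unsatisfied flag per interaction; the variant names and the feedback values are illustrative.

```python
# A minimal sketch of the A/B comparison: two control strategies for the
# same device, satisfaction logged as 1 (satisfied) / 0 (not satisfied).
from statistics import mean

feedback = {
    "strategy_A": [1, 1, 0, 1, 1, 0, 1],
    "strategy_B": [1, 0, 0, 1, 0, 0, 1],
}

rates = {variant: mean(scores) for variant, scores in feedback.items()}
winner = max(rates, key=rates.get)
print(f"satisfaction rates: {rates}; more popular strategy: {winner}")
```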
Optionally, obtaining the satisfaction of the user with the generated scene includes: obtaining the satisfaction of the user with a prediction model. Based on the user's historical behavior, personal characteristics, and other relevant factors, the user's reaction to a particular scene may be predicted using a machine learning or statistical model, and the user's satisfaction with the scene can then be inferred from the predicted reaction.
Optionally, obtaining the satisfaction of the user with the generated scene includes: obtaining the satisfaction of the user from the user churn rate and revisit rate. The churn rate and revisit rate of users can be counted: if users are not satisfied with the provided scenes, they may stop using the system, which increases the churn rate; if users are very satisfied, they may return often, which increases the revisit rate. Monitoring the churn rate and revisit rate therefore helps gauge user satisfaction.
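These two statistics reduce to simple set arithmetic over per-period activity logs; the sketch below is one possible formulation, with the period granularity and the user sets as illustrative assumptions.

```python
# A minimal sketch of the churn-rate / revisit-rate statistics over two
# activity periods; the user sets are illustrative assumptions.
def churn_rate(active_last: set, active_now: set) -> float:
    # Share of last period's users who did not come back this period.
    return len(active_last - active_now) / len(active_last) if active_last else 0.0

def revisit_rate(active_last: set, active_now: set) -> float:
    # Share of last period's users who returned this period.
    return len(active_last & active_now) / len(active_last) if active_last else 0.0

print(churn_rate({"u1", "u2", "u3"}, {"u1"}))    # ~0.67: likely dissatisfied
print(revisit_rate({"u1", "u2", "u3"}, {"u1"}))  # ~0.33
```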
Optionally, updating the first scene generation model according to the satisfaction of the user includes: evaluating the first scene scheme according to the user's satisfaction to obtain an evaluation result, wherein the evaluation result includes a positive evaluation or a negative evaluation; and inputting the evaluation result into a knowledge base in the smart home domain to update the first scene generation model.
Optionally, inputting the evaluation result into the knowledge base in the smart home domain and updating the first scene generation model includes: expanding the knowledge of the knowledge base in the smart home domain according to the evaluation result to obtain an updated knowledge base; and training the first scene generation model according to the updated knowledge base.
In the embodiment of the disclosure, the first scene generation model can be optimized. The evaluation result is input into a knowledge base in the smart home domain, in which structured and unstructured data are stored, including but not limited to: product design data, scene designs, product planning, enterprise standardization data, product function lists, etc. The evaluation result can be stored in the knowledge base as new data. A knowledge graph is then constructed from the knowledge base containing the evaluation result and used to further train the first scene generation model, so that the scenes generated by the first scene generation model meet the requirements of users.
Optionally, updating the first scene generation model according to the satisfaction of the user includes: and evaluating the first scene scheme according to the satisfaction degree of the user to obtain an evaluation result. Inputting the evaluation result into the large language model to obtain correction data, and updating the first scene generation model by using the correction data.
In the embodiment of the disclosure, the large language model is a deep learning model trained on massive text data. It can generate natural language text, understand the meaning of language text, and handle various natural language tasks such as text summarization, question answering, and translation. Large language models are characterized by their scale: they contain billions of parameters and can learn complex patterns in language data. Their advantages include naturalness, generalization ability, efficiency, strong task-processing ability, high-quality text generation, strong dialogue-system capabilities, reduced dependence on domain data, and promotion of progress in the artificial intelligence field. They can understand and generate natural language text and adapt to a variety of languages and contexts.
The evaluation result is input into the large language model, which outputs correction data corresponding to the positive or negative evaluation. The correction data are applied to the scene vector library to update the scene rules, yielding an updated scene vector library, and the first scene generation model is trained with the updated library. The scenes generated by the first scene generation model can thus better meet the requirements of users.
Optionally, updating the first scene generation model with the correction data includes: applying the correction data to a scene vector library to update scene rules and obtain an updated scene vector library; and training through the updated scene vector library to obtain a second scene generation model.
According to the embodiment of the disclosure, the scene rules in the scene vector library can be updated according to the correction data, so that the scene vectors in the library are more realistic and better meet the requirements of users. The scene vector library contains a large amount of data for scene generation, and the second scene generation model is obtained by training on this library, so the model can draw on that data and the generated scenes ultimately better meet user requirements.
Optionally, training through the updated scene vector library to obtain a second scene generation model includes: creating a training set according to the updated scene vector library, wherein the training set includes scenes that match the updated scene vector library; and training with the training set to obtain the second scene generation model.
In the embodiment of the disclosure, the training set includes scenes conforming to the updated scene vector library. Training on this set allows the resulting second scene generation model to be adjusted based on the updated library, so that, through the user's feedback, the scenes generated by the model better meet the user's requirements.
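A minimal sketch of building such a training set is shown below, assuming each library entry pairs a scene vector with the scene scheme it describes; the field names and the matches_updated_rules flag are illustrative assumptions about how "matching the updated library" is recorded.

```python
# A minimal sketch of building the training set from the updated scene
# vector library; field names are assumptions, not the disclosure's schema.
def build_training_set(scene_vector_library):
    training_set = []
    for entry in scene_vector_library:
        # Keep only scenes that conform to the updated scene rules.
        if entry.get("matches_updated_rules", True):
            training_set.append((entry["scene_vector"], entry["scene_scheme"]))
    return training_set

library = [
    {"scene_vector": [0.2, 0.7], "scene_scheme": "sleep mode",
     "matches_updated_rules": True},
    {"scene_vector": [0.9, 0.1], "scene_scheme": "party mode",
     "matches_updated_rules": False},
]
print(build_training_set(library))   # only the sleep-mode sample survives
```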
Optionally, training with the training set to obtain a second scene generation model includes: training a base model with the training set to obtain the second scene generation model.
The second scene generation model is a scene generation model obtained by training on the basis of the base model after integrating the user's feedback data. Compared with the first scene generation model, it has more reference data and can generate a more realistic scene scheme according to the user's requirements, thereby meeting those requirements.
In an embodiment of the disclosure, the base model includes a neural-network language model, a recurrent neural network language model, a long short-term memory network language model, or a gated recurrent unit language model. These models are widely used in natural language processing, for example in speech recognition, machine translation, and text classification.
According to the embodiment of the disclosure, the base model is trained with the training set, so the resulting second scene generation model integrates the user's feedback data and the scenes it generates meet the user's requirements.
Optionally, training using the training set to obtain a second scene generation model includes: and adjusting the first scene generating model by using the training set to obtain a second scene generating model.
According to the embodiment of the disclosure, the training set serves as new training data and is combined with the user data, device data, environment data, and spatial data, so that all of these can be used together when training the first scene generation model; the parameters of the first scene generation model are then adjusted to obtain the second scene generation model.
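This parameter adjustment can be read as ordinary fine-tuning. Below is a minimal sketch in PyTorch, assuming the first scene generation model is a small network mapping demand-plus-context features to scene-scheme vectors; the architecture, shapes, and hyperparameters are illustrative assumptions, not the disclosure's actual model.

```python
# A minimal fine-tuning sketch: adjust the first scene generation model's
# parameters on the new training set to obtain the second model.
import torch
import torch.nn as nn

first_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
optimizer = torch.optim.Adam(first_model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Hypothetical batch drawn from the training set built above, combined
# with user / device / environment / spatial features.
demand_vectors = torch.randn(4, 16)
target_schemes = torch.randn(4, 8)

for _ in range(10):                       # a few adjustment steps
    optimizer.zero_grad()
    loss = loss_fn(first_model(demand_vectors), target_schemes)
    loss.backward()
    optimizer.step()
# The adjusted parameters constitute the second scene generation model.
```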
Optionally, performing positive evaluation or negative evaluation on the generated scene according to the satisfaction degree of the user, including: and according to the satisfaction degree of the user, evaluating the scene corresponding to the satisfaction degree by using an evaluation model to obtain the evaluation score of the scene. And in the case that the evaluation score is higher than or equal to the score threshold value, determining to perform forward evaluation on the generated scene. Or in the case that the evaluation score is lower than the score threshold, determining to perform negative evaluation on the generated scene.
In the embodiment of the disclosure, the scene corresponding to the satisfaction needs to be evaluated with the evaluation model, which scores the current scene according to the user's satisfaction and assigns it an evaluation score. The evaluation score represents how well the current scene meets the user's needs: the better the fit, the higher the user's satisfaction.
The evaluation model sorts the evaluation scores based on historical evaluation data and selects high-score and low-score scenes, from which it determines a score threshold. When the evaluation score is higher than or equal to the score threshold, the generated scene receives a positive evaluation; when the evaluation score is lower than the score threshold, it receives a negative evaluation. In this way, comparing the evaluation score with the score threshold yields a positive or negative evaluation of the generated scene.
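The threshold comparison itself is a one-line rule; the sketch below spells it out, with the score range and the 0.6 threshold as illustrative assumptions (in the disclosure the threshold is derived from high- and low-score scenes).

```python
# A minimal sketch of the threshold comparison; the [0, 1] score range
# and the 0.6 threshold are illustrative assumptions.
def evaluate_scene(evaluation_score: float, score_threshold: float = 0.6) -> str:
    if evaluation_score >= score_threshold:
        return "positive evaluation"
    return "negative evaluation"

print(evaluate_scene(0.8))   # positive evaluation
print(evaluate_scene(0.4))   # negative evaluation
```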
In the embodiment of the disclosure, the control center device can acquire the user's feedback information, which reflects the user's feelings and the execution effect of the first scene scheme. Evaluating the feedback information helps the control center device understand both. The evaluation result is input into the knowledge base in the smart home domain, where it can serve as training material for constructing a knowledge graph; the knowledge graph is used to further train the first scene generation model so that the scenes it generates meet the requirements of users.
Optionally, before evaluating the scene corresponding to the satisfaction degree by using the evaluation model according to the satisfaction degree of the user, the method further includes: selecting a machine learning model or a statistical model; training the selected model according to the historical user feedback data to obtain an evaluation model.
Optionally, the machine learning model includes a decision tree, random forest, support vector machine, neural network, or clustering algorithm.
Alternatively, the statistical model comprises linear regression, logistic regression, analysis of variance, chi-square test, or survival analysis.
In the embodiment of the disclosure, training the evaluation model is particularly important, since the evaluation model performs the positive or negative evaluation of the generated scene. First, a suitable model to be trained is selected. If a machine learning model is selected, it performs better on large-scale, high-dimensional data: it can mine complex relationships in the data, accurately predict unknown data, and continuously optimize its own parameters through an adaptive learning algorithm to adapt to changes in the data. If a statistical model is selected, it gives estimates of its parameters and of their influence on the target variable, making the results easier to interpret and understand; it is also more effective on data with clear statistical regularities and produces relatively stable results. Second, the control center device acquires historical user feedback data and trains the selected model on it. The trained evaluation model thus learns the habits of historical users and produces more accurate evaluation results.
In practical applications, a suitable machine learning model or statistical model may be selected according to circumstances.
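As a concrete instance of "select a model, then train it on historical feedback", the sketch below fits a random forest on a few hand-made feedback rows; the feature layout, the labels, and the choice of random forest are illustrative assumptions.

```python
# A minimal sketch of training the evaluation model on historical user
# feedback; features are [hour_of_day, device_count, temperature].
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X = np.array([[22, 3, 26.0], [7, 1, 18.5], [23, 4, 25.0], [8, 2, 19.0]])
y = np.array([1, 0, 1, 0])   # 1 = satisfied, 0 = unsatisfied

evaluation_model = RandomForestClassifier(n_estimators=100, random_state=0)
evaluation_model.fit(X, y)

# Score a new scene; the class-1 probability serves as the evaluation score.
score = evaluation_model.predict_proba([[21, 3, 25.5]])[0, 1]
print(f"evaluation score: {score:.2f}")
```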
Optionally, training a plurality of first scene vectors to obtain a scene vector library includes: training the plurality of first scene vectors to obtain an initial scene vector library; training the initial scene vector library with a conditional generative adversarial network to obtain a second scene vector; and adding the second scene vector to the initial scene vector library to obtain the scene vector library.
The conditional generative adversarial network is a generative model in which condition data are combined with the inputs of both the generator and the discriminator to control the generated output. This enables the model to generate samples subject to constraints according to specific conditions, for example generating images of a specific category according to a category label.
The conditional generative adversarial network consists of a generator and a discriminator. The generator's task is to produce realistic samples from given random inputs and condition data. The discriminator's task is to distinguish real samples from samples produced by the generator, and to keep improving its classification accuracy. During training, the generator and the discriminator are trained adversarially, continuously optimizing their respective parameters to achieve better generation and discrimination performance.
In the disclosed embodiments, the scene vector library may be trained by a conditional generative adversarial network comprising a generator and a discriminator. The generator can produce a second scene vector from the input scene vector, and the discriminator can judge whether the generated scene vector is realistic. During training, the generator generates scene vectors while the discriminator distinguishes real data from generated data. Through such adversarial training, the generator eventually produces second scene vectors closer to the real data, yielding the scene generation model. The scene schemes generated by the scene generation model are therefore more realistic, and the generated scenes meet the requirements of users.
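The adversarial loop described above can be sketched compactly in PyTorch. Everything below is a simplified illustration: the dimensions, the random stand-in data, the form of the condition vector (e.g. an encoded evaluation result), and the training schedule are assumptions, not the disclosure's actual configuration.

```python
# A simplified conditional-GAN training loop for scene vectors.
import torch
import torch.nn as nn

NOISE_DIM, COND_DIM, SCENE_DIM = 8, 4, 16

generator = nn.Sequential(
    nn.Linear(NOISE_DIM + COND_DIM, 32), nn.ReLU(),
    nn.Linear(32, SCENE_DIM),
)
discriminator = nn.Sequential(
    nn.Linear(SCENE_DIM + COND_DIM, 32), nn.LeakyReLU(0.2),
    nn.Linear(32, 1), nn.Sigmoid(),
)
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_scenes = torch.randn(32, SCENE_DIM)  # first scene vectors (stand-in)
conditions = torch.randn(32, COND_DIM)    # condition data, e.g. encoded evaluations

for _ in range(100):
    # Discriminator step: real vs generated scene vectors, both conditioned.
    noise = torch.randn(32, NOISE_DIM)
    fake_scenes = generator(torch.cat([noise, conditions], dim=1))
    d_real = discriminator(torch.cat([real_scenes, conditions], dim=1))
    d_fake = discriminator(torch.cat([fake_scenes.detach(), conditions], dim=1))
    d_loss = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make generated scene vectors pass as real.
    d_fake = discriminator(torch.cat([fake_scenes, conditions], dim=1))
    g_loss = bce(d_fake, torch.ones_like(d_fake))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generator outputs serve as second scene vectors
# to be added to the initial scene vector library.
```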
Optionally, updating the first scene generation model according to the satisfaction of the user further includes: evaluating the first scene scheme according to the satisfaction of the user to obtain an evaluation result; and inputting the evaluation result into the conditional generative adversarial network to update the first scene generation model.
In the embodiment of the disclosure, the conditional generative adversarial network can be used to train the scene vector library. Inputting the evaluation result into the network enriches the data available to it, so that when the generator produces scene vectors it can use the input evaluation result to generate more realistic vectors. Continuous training of the conditional generative adversarial network therefore makes the scenes in the scene vector library more realistic and better able to meet the requirements of users.

Optionally, obtaining the requirement of the user includes: receiving request data of the user; adding background information to the request data, wherein the background information includes user data, device data, environment data, and spatial data; processing the request data with the added background information to obtain fused request data; and transmitting the fused request data to a classifier, wherein the classifier is used to classify the user's request.
In the embodiment of the disclosure, adding background information to the request data gives the user's request data more effective, scene-related information, which makes it easier for the classifier to understand and classify. In the related art, user requests are classified directly by a deep learning model. According to the embodiment of the disclosure, by contrast, the data undergo further processing after the background information is added, so the processed data are easier for the classifier to understand, the classification result is more accurate, and the accuracy of the classification model is ultimately improved.
Optionally, the classifier classifies the request of the user, including: acquiring a mapping relation between history input data and history output labels; and predicting the request of the user according to the mapping relation to obtain a prediction result.
In the embodiment of the disclosure, the basic working principle of the classifier is to learn and model the mapping relation between input data and output labels so as to predict labels for actual input data; the classification result for the user's request is then obtained from the predicted output label.
Optionally, obtaining the mapping relationship between the history input data and the history output tag includes: acquiring historical input data and a historical output tag corresponding to the historical input data; and learning the history input data and the history output label to obtain a mapping relation.
In the embodiment of the disclosure, the classifier learns from the features of the historical input data and can find the mapping relation between the historical input data and the historical output labels. Specifically, if a deep learning model such as an RNN or LSTM is selected, training mainly consists of adjusting the weight parameters of the network through back propagation and gradient descent.
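A minimal sketch of learning such a mapping is given below. For brevity it uses a TF-IDF plus logistic-regression pipeline in place of the RNN/LSTM mentioned above, and the historical requests and labels are illustrative assumptions.

```python
# A minimal sketch of learning the input-to-label mapping from history.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

history_inputs = [
    "turn on the living room air conditioner",
    "dim the bedroom lights",
    "start sleep mode",
    "make the living room cooler",
]
history_labels = ["climate", "lighting", "scene", "climate"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(history_inputs, history_labels)   # learn the mapping

# Predict a new request from the learned mapping (likely "climate",
# learned from the similar historical request).
print(classifier.predict(["make the bedroom cooler"]))
```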
Optionally, adding background information to the request data includes: collecting the background information, wherein the background information includes user data, device data, environment data, and spatial data; and combining the request data with the background information.
In the embodiment of the disclosure, the control center device can acquire the background information of the current smart home after the user sends the request data. The background information contains much scene-related information; adding it to the request data makes the content richer and provides more effective information for the classifier to understand and classify. Combining the request data with the background information binds the two together, so that they can be transmitted simultaneously during data transmission and understood and classified together by the classifier. Through comprehensive understanding of the request data and the background information, the classification result of the classifier becomes more accurate, ultimately improving the accuracy of the classification model.
Optionally, performing domain vectorization processing on the request data with the added background information includes: forming conversion content from the background information; inserting the conversion content into the user's request data to obtain comprehensive data; and converting the comprehensive data into a fixed-length vector.
Domain vectorization processing vectorizes data in the household appliance domain. It can convert unstructured data into a structured vector representation, which improves the performance and accuracy of the model. Through domain vectorization, the intrinsic rules and patterns of the data can be mined in depth and the internal structure and characteristics of the data better understood, so the model's results can be better explained. In addition, the results of domain vectorization can be shared between different tasks and domains, improving the reusability of the model. Performing domain vectorization on the request data with the added background information makes the processed data easier for the classifier to understand, so the classification result is more accurate and the accuracy of the classification model is ultimately improved.
In the embodiment of the disclosure, the background information takes many forms, and inserting it directly into the user's request data may produce data of inconsistent form, which hinders the classifier's understanding and classification. The conversion content obtained by converting the background information is more easily inserted into the user's request data. Converting the comprehensive data into a fixed-length vector unifies its form, so that when it is input to the classifier it can be understood and classified more easily, making the classification result more accurate and ultimately improving the accuracy of the classification model.
Optionally, forming the conversion content according to the background information includes: acquiring the text in the background information according to a natural language processing algorithm; and converting the text into a numeric vector.
In the disclosed embodiments, text needs to be converted into numeric vectors for processing by a machine learning model or a deep learning model. Text conversion is typically aided by word embedding techniques such as Word2Vec (Word to Vector), GloVe (Global Vectors for Word Representation), or BERT (Bidirectional Encoder Representations from Transformers). BERT is a pre-trained model that can be fine-tuned to classify user intent. Word embedding is a method commonly used in natural language processing that converts text data into numeric vectors for use by machine learning or deep learning models. Word2Vec is a common word embedding method that trains a continuous vector space model in which semantically similar words lie close together in space; it has two variants, CBOW (Continuous Bag of Words) and Skip-gram. CBOW predicts a word from its surrounding context, while Skip-gram predicts the surrounding context from a word. GloVe is another word embedding method, trained on the global co-occurrence statistics of a corpus; it minimizes a weighted reconstruction error over the word co-occurrence matrix. BERT employs a bidirectional Transformer encoder. Unlike most earlier pre-trained language models, which could only be pre-trained with either the left or the right context, BERT introduces a masked language model objective that allows the model to encode the left and right contexts simultaneously.
Each method suits different scenarios: Word2Vec is appropriate when there is abundant training data but computational resources are limited; GloVe fully exploits global statistics; and BERT achieves the best results when handling diverse inputs.
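As a small illustration of the word-embedding step, the sketch below trains a Word2Vec model with gensim on a toy corpus; the corpus is an assumption for demonstration, and a real system would train on a large home-appliance corpus or use pretrained GloVe/BERT representations instead.

```python
# A small Word2Vec illustration with gensim on a toy corpus.
from gensim.models import Word2Vec

corpus = [
    ["turn", "on", "the", "air", "conditioner"],
    ["turn", "off", "the", "living", "room", "light"],
    ["set", "the", "air", "conditioner", "to", "sleep", "mode"],
]

# sg=0 selects CBOW (predict a word from its context); sg=1 selects
# Skip-gram (predict the context from a word).
model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, sg=0)

vector = model.wv["conditioner"]   # numeric vector for one word
print(vector.shape)                # (50,)
print(model.wv.most_similar("air", topn=2))
```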
Optionally, inserting the conversion content into the request data of the user includes: vectorizing the request data of the user to obtain an input vector of the user; and splicing the input vector of the user with the numeric vector of the background information.
The most basic and intuitive splicing method is to concatenate the numeric vector of the background information directly onto the tail or head of the user's input vector. For example, if the user input is a vector of length N and the background information is a vector of length M, a new vector of length N+M is obtained after splicing.
Optionally, inserting the conversion content into the request data of the user includes: vectorizing the request data of the user to obtain an input vector of the user; and performing a weighted summation of the user's input vector and the numeric vector of the background information.
Another method is to weight and sum the user's input vector and the numeric vector of the background information according to certain weights. This requires defining weight parameters that indicate how important the two pieces of information are to the final result. Weighted summation keeps the dimension of the combined vector unchanged.
Optionally, inserting the conversion content into the request data of the user includes: vectorizing the request data of the user to obtain an input vector of the user; and mixing the user's input vector with the numeric vector of the background information. To mix the two, the vectors can be spliced into a longer vector and then passed through one or more fully connected layers for a nonlinear transformation. This method fuses the two parts of information more thoroughly, and the optimal fusion can be learned through training.
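The three fusion strategies just described (splicing, weighted summation, and mixing through fully connected layers) are contrasted in the short sketch below; the vector lengths and the 0.7/0.3 weights are illustrative assumptions.

```python
# The three fusion strategies side by side.
import torch
import torch.nn as nn

user_vec = torch.randn(16)         # vectorized user request (length N = 16)
background_vec = torch.randn(16)   # numeric vector of the background info

# 1) Splicing: a new vector of length N + M.
combined = torch.cat([user_vec, background_vec])     # shape (32,)

# 2) Weighted summation: dimensions stay unchanged.
weighted = 0.7 * user_vec + 0.3 * background_vec     # shape (16,)

# 3) Mixing: splice, then a nonlinear fully connected transform whose
#    weights can be learned during training.
mixer = nn.Sequential(nn.Linear(32, 16), nn.ReLU())
mixed = mixer(combined)                              # shape (16,)

print(combined.shape, weighted.shape, mixed.shape)
```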
Optionally, after receiving the request data of the user, the method further includes: converting the request data; and complementing the converted request data.
When converting the request data, the original input submitted by the user through an interactive device is obtained first. The user may submit input in different ways, including but not limited to voice commands, text commands, or user behavior. For example, the user may say to the voice assistant: "turn on the living room air conditioner", or type "turn on the air conditioner" on a mobile phone. After the interactive device receives the input, the control center device performs a first processing step: for example, if the user submitted the input through a speech recognition system, the received voice command is converted into text form. After conversion, the user's original inputs in different forms are unified into the same form, which facilitates subsequent completion and correction.
The user may omit some important information while interacting. For example, the user may simply say "turn on the air conditioner" without explicitly indicating which room's air conditioner. To address this, the system needs the ability to complete the missing information. The system may infer and complete missing information according to preset rules: for example, if a rule specifies that the living room air conditioner is controlled by default when no room is given, the system automatically interprets "turn on the air conditioner" as "turn on the living room air conditioner". If there is no preset rule, or the preset rules cannot cover all cases, the missing information must be inferred from historical data and context information, for example from the user's past usage records, the current device status, and environmental conditions.
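A minimal sketch of this completion logic is shown below, combining a preset default-room rule with a fallback to the user's usage history; the rule content and the history entries are illustrative assumptions.

```python
# A minimal sketch of the completion step: a preset default-room rule
# plus a fallback to the user's usage history.
def complete_request(request: str, history: list) -> str:
    rooms = ("living room", "bedroom", "kitchen")
    if any(room in request for room in rooms):
        return request   # nothing missing
    # Preset rule: an unqualified "air conditioner" means the living room.
    if "air conditioner" in request:
        return request.replace("air conditioner", "living room air conditioner")
    # Otherwise infer the room from the most recent matching history entry.
    for past in reversed(history):
        for room in rooms:
            if room in past:
                return f"{request} ({room})"
    return request   # nothing to infer; leave as-is

print(complete_request("turn on the air conditioner", []))
# -> "turn on the living room air conditioner"
```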
Optionally, after receiving the request data of the user, the method further includes: converting the request data; and correcting the converted request data.
The user may provide erroneous or ambiguous instructions, requiring correction of the converted request data. For example, the user's speech may be unclear, producing the recognition result "air conditioning on" when the user actually wanted to turn on the water heater. Many modern interactive systems are equipped with powerful language models and error-detection algorithms that can automatically detect and correct such errors.
As shown in connection with fig. 7, in some embodiments, an apparatus 60 for scene generation includes: the user interaction module 03 is configured to acquire the requirements of the user and input the requirements of the user into the first scene generation model; a model application module 04 comprising a first scenario generation model configured to output a first scenario solution upon receiving a user's demand; the training data type of the first scene generation model comprises user data, equipment data, environment data and space data; a control module 05 configured to control the target device according to the first scenario scheme; model training module 06 is configured to train a first scene generation model.
With the device 60 for scene generation provided by the embodiment of the disclosure, scenes can be generated automatically by the scene generation model according to the user's requirements, making the scene generation process more intelligent. In the related art, a scene is generated from a control instruction and a scene classification template, and the applied scene generation model does not include data about the user, devices, environment, and space, so the generated scene may differ considerably from the user's requirements and can hardly satisfy them. The embodiment of the disclosure trains the first scene generation model with user data, device data, environment data, spatial data, and other data, so that the model better understands the entities in the home environment and the relations among them. The first scene generation model can therefore combine factors such as the user, devices, environment, and space to generate scenes, and the generated scenes are closer to the user's requirements, improving the user experience.
In some embodiments, a computer-readable storage medium includes a stored program, wherein the program when run performs the method for scene generation described above.
Embodiments of the present disclosure may be embodied in a software product stored on a storage medium and including one or more instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of a method of embodiments of the present disclosure. The aforementioned storage medium may be a non-transitory storage medium, such as a USB disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
As shown in connection with fig. 8, in some embodiments, the electronic device comprises a memory 701 and a processor 700, the memory 701 having stored therein a computer program, the processor 700 being arranged to perform the above-described method for scene generation by means of the computer program.
Optionally, the electronic device 70 may also include a communication interface 702 and a bus 703. The processor 700, the communication interface 702, and the memory 701 may communicate with each other through the bus 703. The communication interface 702 may be used for information transfer. The processor 700 may call logic instructions in the memory 701 to perform the method for scene generation of the above-described embodiments.
Further, the logic instructions in the memory 701 may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a standalone product.
The memory 701 is used as a computer readable storage medium for storing a software program, a computer executable program, and program instructions/modules corresponding to the methods in the embodiments of the present disclosure. The processor 700 executes the functional applications and data processing by running the program instructions/modules stored in the memory 701, i.e. implements the method for scene generation in the above-described embodiments.
The memory 701 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and at least one application program required by a function, and the data storage area may store data created according to the use of the terminal device, etc. In addition, the memory 701 may include a high-speed random access memory and may also include a nonvolatile memory.
The foregoing is merely a preferred embodiment of the present application and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present application, which are intended to be comprehended within the scope of the present application.

Claims (12)

1. A method for scene generation, comprising:
Constructing a hierarchical knowledge graph; wherein the hierarchical knowledge graph is a knowledge graph subjected to deep clustering and grading processing, and the knowledge in the hierarchical knowledge graph is divided into a plurality of different levels according to the degree of correlation and the importance level;
acquiring a plurality of first scene vectors according to the hierarchical knowledge graph;
training a plurality of first scene vectors to obtain a scene vector library;
Training a first scene generation model according to the scene vector library; the training data type of the first scene generation model comprises user data, equipment data, environment data and space data;
Acquiring the requirement of a user;
Inputting the requirements of a user into a first scene generation model to obtain a first scene scheme;
controlling the target device according to the first scene scheme.
2. The method of claim 1, further comprising, after controlling the target device according to the first scene scheme:
acquiring scene data;
updating the first scene scheme according to the scene data to obtain a second scene scheme; and
controlling the target device according to the second scene scheme.
3. The method of claim 1 or 2, wherein the first scene generation model comprises part or all of a plurality of lightweight models.
4. The method of claim 1, wherein obtaining a plurality of first scene vectors from the hierarchical knowledge-graph comprises:
acquiring user data, environment data and space data;
vectorizing the user data, the environment data, the spatial data, and the device data in the hierarchical knowledge graph to obtain a plurality of groups of vector data; and
extracting single vector data in each group of vector data and combining the single vector data to obtain a plurality of first scene vectors.
5. The method of claim 1, wherein constructing a hierarchical knowledge-graph comprises:
acquiring a knowledge base in the smart home domain;
Constructing a primary knowledge graph according to the knowledge base;
training the primary knowledge graph to obtain a hierarchical knowledge graph.
6. The method of claim 5, wherein training the primary knowledge-graph comprises:
clustering the primary knowledge graph; and
grading the clustered primary knowledge graph.
7. The method of claim 5, further comprising, after controlling the target device according to the first scene scheme:
acquiring the satisfaction degree of the user with the first scene scheme;
and updating the first scene generation model according to the satisfaction degree of the user.
8. The method of claim 7, wherein updating the first scene generation model based on the satisfaction of the user comprises:
evaluating the first scene scheme according to the satisfaction degree of the user to obtain an evaluation result; wherein the evaluation result comprises positive evaluation or negative evaluation;
inputting the evaluation result into a knowledge base in the smart home domain to update the first scene generation model; or inputting the evaluation result into a conditional generative adversarial network and updating the first scene generation model.
9. The method of claim 1, wherein training a plurality of first scene vectors to obtain a scene vector library comprises:
training a plurality of first scene vectors to obtain an initial scene vector library;
training the initial scene vector library with a conditional generative adversarial network to obtain a second scene vector;
and adding the second scene vector to the initial scene vector library to obtain a scene vector library.
10. An apparatus for scene generation, comprising:
The user interaction module is configured to acquire the requirements of a user and input the requirements of the user into the first scene generation model;
The model application module comprises a first scene generation model and is configured to output a first scene scheme under the condition that the requirement of a user is received;
a control module configured to control the target device according to a first scenario scheme;
the model training module is configured to train the first scene generation model, and is configured to: construct a hierarchical knowledge graph, wherein the hierarchical knowledge graph is a knowledge graph subjected to deep clustering and grading processing, and the knowledge in the hierarchical knowledge graph is divided into a plurality of different levels according to the degree of correlation and the importance level; acquire a plurality of first scene vectors according to the hierarchical knowledge graph; train the plurality of first scene vectors to obtain a scene vector library; and train the first scene generation model according to the scene vector library; wherein the training data type of the first scene generation model includes user data, device data, environment data, and spatial data.
11. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored program, wherein the program when run performs the method of any one of claims 1 to 9.
12. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method according to any of claims 1 to 9 by means of the computer program.