CN116913274A - Scene generation method, device and storage medium based on generation type large model - Google Patents

Scene generation method, device and storage medium based on generation type large model

Info

Publication number
CN116913274A
Authority
CN
China
Prior art keywords
scene
instruction
interaction
control
generation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310800447.6A
Other languages
Chinese (zh)
Inventor
李阅苗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Haier Technology Co Ltd
Haier Smart Home Co Ltd
Haier Uplus Intelligent Technology Beijing Co Ltd
Original Assignee
Qingdao Haier Technology Co Ltd
Haier Smart Home Co Ltd
Haier Uplus Intelligent Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Haier Technology Co Ltd, Haier Smart Home Co Ltd, Haier Uplus Intelligent Technology Beijing Co Ltd filed Critical Qingdao Haier Technology Co Ltd
Priority to CN202310800447.6A
Publication of CN116913274A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L15/08: Speech classification or search
    • G10L15/18: Speech classification or search using natural language modelling
    • G10L17/00: Speaker identification or verification techniques
    • G10L17/04: Training, enrolment or model building
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Stored Programmes (AREA)

Abstract

The application discloses a scene generation method, device and storage medium based on a generative large model, relating to the technical field of smart homes. The scene generation method based on the generative large model comprises the following steps: identifying interaction data of a target object to obtain an identification result, wherein the identification result at least comprises a control instruction for controlling an intelligent device; inputting the control instruction and a scene classification template into a generative large model to obtain the scene type of a target interaction scene output by the generative large model, wherein the scene classification template at least comprises a correspondence between scene types of historical interaction scenes and instruction formats of control instructions; and inputting the previously established scene generation template corresponding to the scene type, together with the control instruction, into the generative large model, and generating the target interaction scene according to the scene script output by the generative large model.

Description

Scene generation method, device and storage medium based on a generative large model
Technical Field
The present application relates to the technical field of smart homes, and in particular to a scene generation method, device and storage medium based on a generative large model.
Background
Currently, existing voice-interactive scene creation techniques rely primarily on rule engines and natural language processing, both of which have limitations. The rules used by a rule engine are generally manually written and fixed in format; as the number of scenes grows, the rule set becomes very large and increasingly difficult to maintain. Natural language processing techniques, meanwhile, cannot accurately understand the user's intention when the user describes a complex scene, and their shortcomings in semantic understanding mean that the created scene may fail to meet the user's requirements.
The related art therefore faces the technical problem of how to generate an interaction scene based on a generative large model, and no effective solution to this problem has yet been proposed.
Disclosure of Invention
The embodiments of the present application provide a scene generation method, device and storage medium based on a generative large model, which at least solve the technical problem in the related art of how to generate an interaction scene based on a generative large model.
According to an embodiment of the present application, there is provided a scene generation method based on a generative large model, comprising: identifying interaction data of a target object to obtain an identification result, wherein the identification result at least comprises a control instruction for controlling an intelligent device; inputting the control instruction and a scene classification template into a generative large model to obtain the scene type of a target interaction scene output by the generative large model, wherein the scene classification template at least comprises a correspondence between scene types of historical interaction scenes and instruction formats of control instructions; and inputting the previously established scene generation template corresponding to the scene type, together with the control instruction, into the generative large model, and generating the target interaction scene according to the scene script output by the generative large model.
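The three steps of the method can be sketched as a minimal pipeline. This is an illustrative sketch only: the prompt strings, the `stub_model` stand-in, and all function names are assumptions, not the application's actual implementation.

```python
# Illustrative sketch of the three-step method: identify interaction data,
# classify the scene type with a classification template, then generate the
# scene script with a scene generation template. `stub_model` stands in for
# any generative large model; all prompts and names here are assumptions.

def recognize(interaction_data: str) -> str:
    """Step S202: extract a control instruction from the interaction data."""
    # A real system would run speech/gesture recognition here.
    return interaction_data.strip()

def classify_scene(instruction: str, classification_template: str, model) -> str:
    """Step S204: prompt the model with the instruction and the template."""
    prompt = f"{classification_template}\nInstruction: {instruction}\nScene type:"
    return model(prompt)

def generate_scene(instruction: str, generation_template: str, model) -> str:
    """Step S206: prompt the model with the matching generation template."""
    prompt = f"{generation_template}\nInstruction: {instruction}\nScene script:"
    return model(prompt)

def stub_model(prompt: str) -> str:
    # Toy classifier/generator used only to make the sketch runnable.
    if prompt.rstrip().endswith("Scene type:"):
        return "timing" if "a.m." in prompt or "p.m." in prompt else "voice-control"
    return '{"trigger": "08:00", "action": "turn on the air conditioner"}'

instruction = recognize("turn on the air conditioner at 8 a.m.")
scene_type = classify_scene(instruction, "<scene classification template>", stub_model)
scene_script = generate_scene(instruction, "<scene generation template>", stub_model)
```

In a real deployment, the templates would contain the correspondence between historical scene types and instruction formats described above, and the model call would be a request to an actual generative large model.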
In an exemplary embodiment, after identifying the interaction data of the target object, the method further includes: converting the instruction format of the control instruction into the instruction format corresponding to a preset instruction template to obtain a conversion result; when the conversion result indicates that the conversion succeeded, generating the instruction format of the control instruction according to other instruction templates, wherein the other instruction templates respectively correspond to different scene types; and when the conversion result indicates that the conversion failed, sending prompt information to the target object to indicate that the instruction format conversion of the control instruction failed.
In one exemplary embodiment, the instruction format of the control instruction may be generated according to the other instruction templates as follows: acquiring a timing interaction time period of the historical interaction scene and a control request for controlling the intelligent device of the historical interaction scene, wherein the scene type of the historical interaction scene is a timing scene; setting the device execution instruction contained in the control request as a first execution instruction to be executed when a first execution condition is met, wherein the first execution condition is that the request interaction time in the control request belongs to the timing interaction time period; and generating the instruction format of the control instruction from the request interaction time and the first execution instruction according to a first instruction template, wherein the other instruction templates at least comprise the first instruction template corresponding to the scene type of the timing scene.
In one exemplary embodiment, the instruction format of the control instruction may also be generated according to the other instruction templates as follows: acquiring a device interaction action of the historical interaction scene and a control request for controlling the intelligent device of the historical interaction scene, wherein the scene type of the historical interaction scene is an action linkage scene; setting the device execution instruction contained in the control request as a second execution instruction to be executed when a second execution condition is met, wherein the second execution condition is that the request interaction action in the control request is consistent with the device interaction action; and generating the instruction format of the control instruction from the request interaction action and the second execution instruction according to a second instruction template, wherein the other instruction templates at least comprise the second instruction template corresponding to the scene type of the action linkage scene.
In one exemplary embodiment, the instruction format of the control instruction may also be generated according to the other instruction templates as follows: acquiring a device interaction environment of the historical interaction scene and a control request for controlling the intelligent device of the historical interaction scene, wherein the scene type of the historical interaction scene is an environment linkage scene; setting the device execution instruction contained in the control request as a third execution instruction to be executed when a third execution condition is met, wherein the third execution condition is that the request interaction environment in the control request is consistent with the device interaction environment; and generating the instruction format of the control instruction from the request interaction environment and the third execution instruction according to a third instruction template, wherein the other instruction templates at least comprise the third instruction template corresponding to the scene type of the environment linkage scene.
In one exemplary embodiment, the instruction format of the control instruction may also be generated according to the other instruction templates as follows: acquiring preset interaction instructions of the historical interaction scene and a control request for controlling the intelligent device of the historical interaction scene, wherein the scene type of the historical interaction scene is a voice control scene; setting the device execution instruction contained in the control request as a fourth execution instruction to be executed when a fourth execution condition is met, wherein the fourth execution condition is that the request interaction instruction in the control request matches one of the preset interaction instructions; and generating the instruction format of the control instruction from the request interaction instruction and the fourth execution instruction according to a fourth instruction template, wherein the other instruction templates at least comprise the fourth instruction template corresponding to the scene type of the voice control scene.
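The voice-control check described above can be sketched as follows; the preset phrases and the execution instruction are illustrative assumptions, not values from the application.

```python
# Sketch of the voice-control scene check: the fourth execution instruction
# runs only when the request interaction instruction matches one of the
# scene's preset interaction instructions. All phrases here are assumptions.

PRESET_INTERACTION_INSTRUCTIONS = {"i'm home", "good night"}

def voice_control(request_instruction: str):
    # Fourth execution condition: the request matches a preset instruction.
    if request_instruction.lower() in PRESET_INTERACTION_INSTRUCTIONS:
        return "turn on the air conditioner"   # fourth execution instruction
    return None                                # condition not met: do nothing

hit = voice_control("I'm home")
miss = voice_control("open the window")
```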
In an exemplary embodiment, before inputting the previously established scene generation template corresponding to the scene type and the control instruction into the generative large model, the method further includes: determining all control instructions of the target object from the identification result; inputting all the control instructions and an instruction formatting template into the generative large model to obtain formatted target control instructions output by the generative large model, wherein the instruction formatting template includes the instruction format of the target control instructions and formatting operations to be applied to all the control instructions, the formatting operations comprising at least one of: rewriting the interaction objects of all the control instructions into the first person, formatting all the control instructions into imperative sentences, and deleting non-entity words from all the control instructions; and updating the control instructions to the target control instructions.
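The formatting operations (first-person rewrite, imperative form, dropping non-entity words) might look as sketched below. In practice these operations are delegated to the generative large model via the instruction formatting template; the word list here is purely an illustrative assumption.

```python
# Crude, illustrative sketch of the formatting operations that the
# instruction formatting template asks the model to perform. A real system
# delegates this to the generative large model; the word list is an assumption.

NON_ENTITY_WORDS = {"please", "kindly", "remember", "to", "help", "me"}

def format_instruction(text: str) -> str:
    words = text.lower().replace("you", "i").split()        # first-person rewrite
    kept = [w for w in words if w not in NON_ENTITY_WORDS]  # drop non-entity words
    return " ".join(kept)                                   # imperative-style result

formatted = format_instruction("Please remember to help me turn on the air conditioner")
```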
In an exemplary embodiment, inputting the previously established scene generation template corresponding to the scene type and the control instruction into the generative large model includes: acquiring, from the scene generation template, the first device type of the intelligent device in the scene type, the first device attribute of the intelligent device, and the device instructions supported by the intelligent device; and, when the second device type of the intelligent device in the control instruction is consistent with the first device type, the second device attribute of the intelligent device in the control instruction is consistent with the first device attribute, and the control instruction belongs to the device instructions supported by the intelligent device, inputting the previously established scene generation template corresponding to the scene type, together with the control instruction, into the generative large model.
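The consistency check above can be sketched as a simple predicate; the field names and example values below are assumptions made for illustration.

```python
# Sketch of the pre-input consistency check: the device type, device
# attribute, and supported-instruction checks must all pass before the
# scene generation template and control instruction are fed to the model.

def may_input_to_model(template_device: dict, instruction_device: dict) -> bool:
    return (instruction_device["type"] == template_device["type"]            # second vs first device type
            and instruction_device["attribute"] == template_device["attribute"]  # second vs first attribute
            and instruction_device["command"] in template_device["supported"])   # supported device instruction

template_device = {"type": "air_conditioner", "attribute": "living_room",
                   "supported": {"turn_on", "turn_off", "set_temperature"}}
ok = may_input_to_model(template_device,
                        {"type": "air_conditioner", "attribute": "living_room",
                         "command": "turn_on"})
bad = may_input_to_model(template_device,
                         {"type": "air_conditioner", "attribute": "bedroom",
                          "command": "turn_on"})
```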
According to another aspect of the embodiments of the present application, there is also provided a scene generation device based on a generative large model, comprising: a result obtaining module configured to identify interaction data of a target object to obtain an identification result, wherein the identification result at least comprises a control instruction for controlling an intelligent device; a type obtaining module configured to input the control instruction and a scene classification template into a generative large model to obtain the scene type of a target interaction scene output by the generative large model, wherein the scene classification template at least comprises a correspondence between scene types of historical interaction scenes and instruction formats of control instructions; and a scene generation module configured to input the previously established scene generation template corresponding to the scene type, together with the control instruction, into the generative large model, and generate the target interaction scene according to the scene script output by the generative large model.
According to a further aspect of embodiments of the present application, there is also provided a computer readable storage medium having a computer program stored therein, wherein the computer program is configured to perform the above-described scene generation method based on a generative large model when run.
According to still another aspect of the embodiment of the present application, there is further provided an electronic device including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the scene generating method based on the generated large model through the computer program.
In the embodiments of the present application, the control instruction for controlling the intelligent device, obtained by identifying the interaction data of the target object, is input into a generative large model together with a scene classification template to obtain the scene type of the target interaction scene output by the model; the previously established scene generation template corresponding to that scene type is then input into the generative large model together with the control instruction, and the target interaction scene is generated according to the scene script output by the model. This technical scheme solves the technical problem in the related art of how to generate an interaction scene based on a generative large model: an interaction scene that meets the user's requirements can be generated by the generative large model, thereby improving the efficiency of interaction scene generation.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, and it will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a hardware environment for a scene generation method based on a generative large model according to an embodiment of the present application;
FIG. 2 is a flowchart of a scene generation method based on a generative large model according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a scene generation method based on a generative large model according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a generative large model performing scene classification on user utterances according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a generative large model generating a scene script from a user utterance according to an embodiment of the present application;
FIG. 6 is a block diagram of a scene generation apparatus based on a generative large model according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art may better understand the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to one aspect of the embodiments of the present application, a scene generation method based on a generative large model is provided. The method is widely applicable to whole-house intelligent digital control scenarios such as Smart Home, smart home device ecosystems, and Intelligence House ecosystems. Optionally, in this embodiment, the above scene generation method based on a generative large model may be applied to a hardware environment composed of the terminal device 102 and the server 104 as shown in fig. 1. As shown in fig. 1, the server 104 is connected to the terminal device 102 through a network and may be used to provide services (such as application services) for the terminal or for a client installed on the terminal. A database may be set up on the server, or independently of it, to provide data storage services for the server 104; likewise, cloud computing and/or edge computing services may be configured on the server, or independently of it, to provide data computing services for the server 104.
The network may include, but is not limited to, at least one of: a wired network, a wireless network. The wired network may include, but is not limited to, at least one of: a wide area network, a metropolitan area network, a local area network. The wireless network may include, but is not limited to, at least one of: Wi-Fi (Wireless Fidelity), Bluetooth. The terminal device 102 may be, but is not limited to, a PC, a mobile phone, a tablet computer, an intelligent air conditioner, an intelligent range hood, an intelligent refrigerator, an intelligent oven, an intelligent cooktop, an intelligent washing machine, an intelligent water heater, an intelligent washing device, an intelligent dishwasher, an intelligent projection device, an intelligent television, an intelligent clothes hanger, an intelligent curtain, an intelligent audio-video device, an intelligent socket, an intelligent sound box, an intelligent fresh-air device, an intelligent kitchen and bathroom device, an intelligent bathroom device, an intelligent sweeping robot, an intelligent window-cleaning robot, an intelligent mopping robot, an intelligent air purification device, an intelligent steam box, an intelligent microwave oven, an intelligent kitchen appliance, an intelligent purifier, an intelligent water dispenser, an intelligent door lock, and the like.
In this embodiment, a scene generation method based on a generative large model is provided and applied to the above terminal device. Fig. 2 is a flowchart of a scene generation method based on a generative large model according to an embodiment of the present application, and the flow includes the following steps:
Step S202: identifying interaction data of a target object to obtain an identification result, wherein the identification result at least comprises a control instruction for controlling an intelligent device;
Step S204: inputting the control instruction and a scene classification template into a generative large model to obtain the scene type of a target interaction scene output by the generative large model, wherein the scene classification template at least comprises a correspondence between scene types of historical interaction scenes and instruction formats of control instructions;
the generated large model (generating Pre-trained Transformer) is a deep learning model trained based on data such as internet, and may be used for fine tuning to complete natural language processing tasks such as text generation, code generation, video generation, text question-and-answer, image generation, paper writing, film and television creation, scientific experimental design, and the like, and may include, but is not limited to, uniLM, BART, T and GPT models.
Step S206: inputting the previously established scene generation template corresponding to the scene type, together with the control instruction, into the generative large model, and generating the target interaction scene according to the scene script output by the generative large model.
Through the above steps, the interaction data of the target object is identified to obtain an identification result that at least comprises a control instruction for controlling an intelligent device; the control instruction and a scene classification template are input into a generative large model to obtain the scene type of the target interaction scene output by the model, wherein the scene classification template at least comprises a correspondence between scene types of historical interaction scenes and instruction formats of control instructions; and the previously established scene generation template corresponding to the scene type is input into the generative large model together with the control instruction, and the target interaction scene is generated according to the scene script output by the model. This solves the technical problem in the related art of how to generate an interaction scene based on a generative large model: an interaction scene that meets the user's requirements can be generated by the generative large model, thereby improving the efficiency of interaction scene generation.
In step S202, the interaction data of the target object includes, but is not limited to, voice data, text data, gesture data, limb motion data, and the like. The interaction data may be determined according to the interaction mode used, where the interaction modes may include, but are not limited to, a voice interaction mode, a text interaction mode, and a gesture interaction mode. The voice interaction mode corresponds to voice data, the text interaction mode corresponds to text data, and both gesture data and limb motion data may correspond to the gesture interaction mode.
Optionally, with respect to step S206: if scenes are generated in multiple independent sessions, the previously established scene generation template corresponding to the scene type must be input into the generative large model together with the control instruction each time a scene is generated. If scenes are generated within a single session, the scene generation template and the control instruction are input into the generative large model only the first time; when the model is reused afterwards, only the new control instruction needs to be input, since the scene generation template has already been provided and need not be input repeatedly.
In an exemplary embodiment, after the interaction data of the target object has been identified in step S202, the instruction format of the control instruction may be converted as follows: converting the instruction format of the control instruction into the instruction format corresponding to a preset instruction template to obtain a conversion result; when the conversion result indicates that the conversion succeeded, generating the instruction format of the control instruction according to other instruction templates, wherein the other instruction templates respectively correspond to different scene types; and when the conversion result indicates that the conversion failed, sending prompt information to the target object to indicate that the instruction format conversion of the control instruction failed.
In the above embodiment, for example, the instruction format corresponding to the preset instruction template may be "when (trigger), (action)". When the control instruction is "when I say I'm home, remember to help me turn on the air conditioner", the control instruction is converted into "when I say 'I'm home', turn on the air conditioner".
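A minimal sketch of this conversion, assuming a simple "when (trigger), (action)" preset format; the regular expression and the result layout are illustrative assumptions, not the application's actual template.

```python
import re

# Sketch of converting a free-form utterance into the preset instruction
# format "when <trigger>, <action>", with a success/failure conversion
# result. The pattern below is an illustrative assumption.

PRESET_PATTERN = re.compile(r"when i say (.+?),.*?(turn on .+|turn off .+)",
                            re.IGNORECASE)

def convert(utterance: str) -> dict:
    m = PRESET_PATTERN.search(utterance)
    if m:   # conversion succeeded
        return {"ok": True,
                "trigger": f'when I say "{m.group(1)}"',
                "action": m.group(2)}
    # conversion failed: send prompt information to the target object
    return {"ok": False, "prompt": "instruction format conversion failed"}

result = convert("when I say I'm home, remember to help me turn on the air conditioner")
failed = convert("make it cozy in here")
```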
In one exemplary embodiment, the process of generating the instruction format of the control instruction according to the other instruction templates may be implemented as follows: acquiring a timing interaction time period of the historical interaction scene and a control request for controlling the intelligent device of the historical interaction scene, wherein the scene type of the historical interaction scene is a timing scene; setting the device execution instruction contained in the control request as a first execution instruction to be executed when a first execution condition is met, wherein the first execution condition is that the request interaction time in the control request belongs to the timing interaction time period; and generating the instruction format of the control instruction from the request interaction time and the first execution instruction according to a first instruction template, wherein the other instruction templates at least comprise the first instruction template corresponding to the scene type of the timing scene.
In the above embodiment, optionally, when the historical interaction scene is the timing scene "turn on the air conditioner at 8 a.m.": if the request interaction time in the control request is "8 a.m.", the request interaction time belongs to the timing interaction time period, so the first execution condition is satisfied and the first execution instruction "turn on the air conditioner" is executed.
If the request interaction time in the control request is 7 o'clock and does not belong to the timing interaction time period, the timing interaction time period is updated using the request interaction time. Specifically, the timing interaction time period can be updated as follows: determine the historical interaction time closest to the request interaction time within the timing interaction time period, and replace that historical interaction time with the request interaction time to obtain the updated timing interaction time period.
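The timing-scene check and the closest-time update above can be sketched as follows, with times simplified to integer hours; the data layout and the execution instruction are illustrative assumptions.

```python
# Sketch of the timing-scene logic: execute the first execution instruction
# when the request interaction time belongs to the timing interaction time
# period; otherwise replace the closest historical interaction time with
# the request interaction time. Hours are simplified to integers.

def handle_timing_request(request_hour: int, timing_hours: list) -> str:
    if request_hour in timing_hours:
        # First execution condition met: run the first execution instruction.
        return "execute: turn on the air conditioner"
    # Condition not met: update the period in place with the request time.
    closest = min(timing_hours, key=lambda h: abs(h - request_hour))
    timing_hours[timing_hours.index(closest)] = request_hour
    return f"updated timing period: {sorted(timing_hours)}"

hours = [8, 20]                               # e.g. 8 a.m. and 8 p.m.
r_hit = handle_timing_request(8, hours)       # 8 o'clock is in the period
r_update = handle_timing_request(7, hours)    # 7 o'clock replaces nearest hour (8)
```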
In an exemplary embodiment, the above process of generating the instruction format of the control instruction according to the other instruction templates may also be implemented as follows: acquiring a device interaction action of the historical interaction scene and a control request for controlling the intelligent device of the historical interaction scene, wherein the scene type of the historical interaction scene is an action linkage scene; setting the device execution instruction contained in the control request as a second execution instruction to be executed when a second execution condition is met, wherein the second execution condition is that the request interaction action in the control request is consistent with the device interaction action; and generating the instruction format of the control instruction from the request interaction action and the second execution instruction according to a second instruction template, wherein the other instruction templates at least comprise the second instruction template corresponding to the scene type of the action linkage scene.
In the above embodiment, optionally, when the historical interaction scene is the action linkage scene "turn on the air conditioner when the door is opened": if the request interaction action in the control request is "close the door", which is inconsistent with the device interaction action "open the door" of the historical interaction scene, the second execution condition is not satisfied and the second execution instruction "turn on the air conditioner" is not executed; if the request interaction action in the control request is "open the door", which is consistent with the device interaction action "open the door", the second execution condition is satisfied and the second execution instruction "turn on the air conditioner" is executed.
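The action-linkage check above reduces to a simple comparison; the action strings and the execution instruction are taken from the example, while the function shape is an assumption.

```python
# Sketch of the action-linkage check: the second execution instruction runs
# only when the request interaction action is consistent with the scene's
# device interaction action.

def action_linkage(request_action: str, device_action: str):
    if request_action == device_action:        # second execution condition
        return "turn on the air conditioner"   # second execution instruction
    return None                                # condition not met: do nothing

hit = action_linkage("open the door", "open the door")
miss = action_linkage("close the door", "open the door")
```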
In an exemplary embodiment, the process of generating the instruction format of the control instruction according to the other instruction templates may be implemented through the following technical scheme, and the specific steps include: acquiring a device interaction environment of the historical interaction scene and a control request of intelligent device for controlling the historical interaction scene, wherein the scene type of the historical interaction scene is an environment linkage scene; setting a device execution instruction contained in the control request as a third execution instruction when a third execution condition is met, wherein the third execution condition is that a request interaction environment in the control request is consistent with the device interaction environment; and generating the instruction format of the control instruction by the request interaction environment and the third execution instruction according to a third instruction template, wherein the other instruction templates at least comprise the third instruction template corresponding to the scene type of the environment linkage scene.
In the above embodiment, optionally, when the historical interaction scene is the environment linkage scene "turn on the air conditioner when the temperature is higher than 30 degrees Celsius": if the temperature of the request interaction environment in the control request is 25 degrees Celsius, which is inconsistent with the device interaction environment "temperature higher than 30 degrees Celsius" of the historical interaction scene, the third execution condition is not satisfied and the third execution instruction "turn on the air conditioner" is not executed; if the temperature of the request interaction environment in the control request is 35 degrees Celsius, which is consistent with the device interaction environment "temperature higher than 30 degrees Celsius" of the historical interaction scene, the third execution condition is satisfied and the third execution instruction "turn on the air conditioner" is executed.
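The third execution condition can be sketched as a threshold comparison; this sketch assumes the device interaction environment is expressed as a simple textual comparison such as "temperature > 30", which is an illustrative assumption rather than the patent's actual format.

```python
import re

# Sketch of the third execution condition for an environment linkage scene:
# parse a comparison like "temperature > 30" and evaluate it against the
# request interaction environment reported in the control request.
def third_condition_met(environment_trigger, request_environment):
    name, op, limit = re.match(
        r"(\w+)\s*([<>]=?)\s*(\d+(?:\.\d+)?)", environment_trigger).groups()
    value, limit = request_environment[name], float(limit)
    return {">": value > limit, ">=": value >= limit,
            "<": value < limit, "<=": value <= limit}[op]
```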
In an exemplary embodiment, the technical solution of generating the instruction format of the control instruction according to the other instruction templates may also be implemented by: acquiring a preset interaction instruction of the historical interaction scene and a control request of intelligent equipment for controlling the historical interaction scene, wherein the scene type of the historical interaction scene is a sound control scene; setting an equipment execution instruction contained in the control request as a fourth execution instruction when a fourth execution condition is met, wherein the fourth execution condition is that a request interaction instruction in the control request belongs to and is consistent with the preset interaction instruction; and generating the instruction format of the control instruction from the request interaction instruction and the fourth execution instruction according to a fourth instruction template, wherein the other instruction templates at least comprise the fourth instruction template corresponding to the scene type of the sound control scene.
In the above embodiment, optionally, the historical interaction scene is the sound control scene "when I say 'go home', turn on the air conditioner": when the user speaks the voice command "go home", the fourth execution condition is satisfied and the fourth execution instruction "turn on the air conditioner" is executed.
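The fourth execution condition reduces to a membership test; a minimal sketch follows, where the synonym set of preset interaction instructions is purely illustrative.

```python
# Sketch of the fourth execution condition for a sound control scene:
# the requested interaction instruction must belong to the preset
# interaction instructions recorded for the scene.
PRESET_INSTRUCTIONS = {"go home", "I'm home", "I'm back"}  # illustrative

def fourth_condition_met(request_instruction, presets=PRESET_INSTRUCTIONS):
    return request_instruction in presets
```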
In an exemplary embodiment, before performing step S204 described above, the control instruction may also be formatted as follows: determining all control instructions of the target object from the identification result; inputting all the control instructions and an instruction formatting template into the generative large model to obtain formatted target control instructions output by the generative large model; wherein the instruction formatting template includes an instruction format of the target control instructions and formatting operations on all the control instructions, the formatting operations including at least one of: rewriting the interaction objects of all the control instructions into the first person, formatting all the control instructions into imperative sentences, and deleting non-entity words in all the control instructions; and updating the control instructions into the target control instructions.
In the above embodiment, optionally, when the identification result is "when I say 'go out', remember to help me turn off the air conditioner", the control instruction determined from the identification result is "remember to help me turn off the air conditioner", and the control instruction may be formatted into the imperative form "turn off the air conditioner".
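In the embodiment these formatting operations are performed by the generative large model; a rule-based approximation can nevertheless sketch the idea. The list of non-entity filler words below is an illustrative assumption, not taken from the patent.

```python
# Rule-based approximation of the formatting operations that the
# instruction formatting template asks the generative large model to
# perform: drop non-entity filler words, keep the imperative core.
NON_ENTITY_WORDS = ("remember to", "please", "help me")  # illustrative list

def format_control_instruction(instruction):
    for word in NON_ENTITY_WORDS:
        instruction = instruction.replace(word, "")
    # collapse the remaining words into a clean imperative sentence
    return " ".join(instruction.split())
```

For example, "remember to help me turn off the air conditioner" becomes the imperative "turn off the air conditioner".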
In an exemplary embodiment, the implementation of generating the target interaction scene according to the scene script output by the generative large model in step S206 may further include: acquiring, from the scene generation template, a first device type of the intelligent device, a first device attribute of the intelligent device, and the device instructions supported by the intelligent device; and inputting the established scene generation template corresponding to the scene type and the control instruction into the generative large model under the condition that a second device type of the intelligent device in the control instruction is consistent with the first device type, a second device attribute of the intelligent device in the control instruction is consistent with the first device attribute, and the control instruction belongs to the device instructions supported by the intelligent device.
For a better understanding of the scene generation method based on the generative large model, the following describes the implementation flow of scene generation based on the generative large model with reference to an alternative embodiment, which does not limit the technical solution of the embodiments of the present application.
In this embodiment, a scenario generation method based on a generative large model is provided in conjunction with fig. 3, and fig. 3 is a schematic diagram of a scenario generation method based on a generative large model according to an embodiment of the present application, and specific steps may include:
Step 1: receive the user's utterance (equivalent to the interaction data described above) and judge whether to start scene self-arrangement; when the user says "start scene self-arrangement" or "create a scene", broadcast "OK" to the user by voice and execute the next step.
Step 2: after each user utterance, broadcast "OK, received" to the user and record the utterance content.
Step 3: invoke GPT (corresponding to the generative large model described above) to classify the user's utterance, where the specific classification process includes: judging whether the user's utterance can be converted into a conditional sentence of the form "when ... (time), (please|I want) ..." (corresponding to the instruction format described above). If it cannot be so converted, determine that the scene classification result is scenetype=0 and report to the user "action parsing failed, please try again"; if the conversion fails 3 times, report: "action parsing failed; if you need to exit the skill, please say 'exit' to me". If the user's utterance can be converted into a conditional sentence of the form "when ... (time), (please|I want) ...", enter the next step.
Step 4: classify the scene based on the conditional sentence obtained in step 3. If it is a timing request, i.e., "when a certain moment arrives, certain actions are performed", then scenetype=1 and the scene is classified as a timing-type scene (corresponding to the timing scene described above). If it is a linkage request, i.e., "when a certain device or space environment state occurs, or when a certain device performs a certain operation, certain actions are performed", then scenetype=2 and the scene is classified as a linkage-type scene (corresponding to the action linkage scene or the environment linkage scene described above). If it is a voice command, i.e., "when the user speaks a specific voice command, certain actions are performed", then scenetype=3 and the scene is classified as a voice-type scene (corresponding to the sound control scene described above).
Further, after completing the scene classification, ask the user by voice: "do you need to add other actions?".
Step 5: when it is detected that the user replies that no other actions need to be added, invoke the GPT model to summarize all of the user's utterance records. A scene summary Prompt may be input to GPT, so that GPT organizes the requests input by the user and outputs a complete, smooth imperative sentence in the first person.
The scene summary Prompt may be expressed, for example, as follows: "
prompt=”
system,
Your task is to organize the input request and output a complete, smooth imperative sentence in the first person. No subject should appear after the 'when ...' clause. Remove modal particles, auxiliary verbs, and adverbs irrelevant to the trigger conditions and trigger actions.
User,
When I get home, remember to help me turn on the air conditioner;
Assistant,
when I say 'go home', turn on the air conditioner.
”。
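The scene summary Prompt above could be assembled into a chat request roughly as follows; the role/message layout follows common chat-model conventions and is an assumption, as is the English wording of the system text.

```python
# Hypothetical assembly of the scene summary Prompt into system/user/
# assistant messages for a chat-style generative large model.
scene_summary_messages = [
    {"role": "system", "content": (
        "Your task is to organize the input request and output a complete, "
        "smooth imperative sentence in the first person. Remove modal "
        "particles, auxiliary verbs and adverbs irrelevant to the trigger "
        "conditions and trigger actions.")},
    {"role": "user", "content":
        "When I get home, remember to help me turn on the air conditioner."},
    {"role": "assistant", "content":
        "When I say 'go home', turn on the air conditioner."},
]
```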
Step 6: broadcast to the user by voice "generate a '{scene summary}' scene for you?" and asynchronously invoke GPT to generate the scene script;
the Prompt used by GPT to generate the scene script may be expressed, for example, as follows: "
If scenetype=1, based on an utterance similar to "turn on the air conditioner at 8 every day and set it to 26 degrees", generate {"if": {"time": "8:00", "repeat": "daily"}, "then": [{"air conditioner": "onoffstatus=true and temperature=26"}]}.
If scenetype=2, based on an utterance similar to "turn on the air conditioner when the door is opened and set it to 26 degrees", generate {"if": {"door lock": "onoffstatus=true"}, "then": [{"air conditioner": "onoffstatus=true and temperature=26"}]}.
If scenetype=3, based on an utterance similar to "when I say I'm back, turn on the air conditioner and set it to 26 degrees", generate {"if": {"voice command": "I'm back"}, "then": [{"air conditioner": "onoffstatus=true and temperature=26"}]}. The voice command should include at least 3 words or phrases identical or similar to "I'm back".
”。
Step 7: if the user reply "yes" is detected, invoke the scene engine interface with the scene script generated in step 6 to create the scene; if the user reply "no" is detected, clear the user's utterance records, the scene summary, and the generated scene script.
In any of the above steps, when it is detected that the user says "exit", broadcast to the user by voice "OK, looking forward to seeing you next time" and stop the scene self-arrangement.
In the above embodiment, optionally, the specific process of classifying the user's utterance in steps 3 to 4 may be described with reference to the schematic diagram shown in fig. 4. Fig. 4 is a schematic diagram of classifying a user's utterance by using a generative large model according to an embodiment of the present application, and the specific steps are as follows:
Step 1: invoke GPT to perform scene classification on the user's utterance "turn on the air conditioner at 8 every morning", judge that the utterance can be converted into the conditional sentence "when it is 8 in the morning every day, turn on the air conditioner", and enter the next step;
Step 2: "when it is 8 in the morning every day, turn on the air conditioner" belongs to a timing request, i.e., "when a certain moment arrives, certain actions are performed", so scenetype=1 and the scene is classified as a timing-type scene;
Step 3: output the scene type "scenetype=1" corresponding to the user's utterance.
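The classification walked through above can be approximated with keyword heuristics; the regular expressions below are illustrative assumptions standing in for the GPT classification of steps 3 to 4, not the prompt actually sent to the model.

```python
import re

# Heuristic stand-in for the GPT-based scene classification:
# 3 = voice-type, 2 = linkage-type, 1 = timing-type, 0 = unparseable.
def classify_scene(utterance):
    if re.search(r"when I say", utterance):
        return 3  # voice-type scene (sound control scene)
    if re.search(r"when .*(open|close|higher|lower|above|below)", utterance):
        return 2  # linkage-type scene (device action or environment state)
    if re.search(r"when .*(\d{1,2}(:\d{2})?|o'clock|noon|morning)", utterance):
        return 1  # timing-type scene
    return 0      # cannot be converted into the conditional form
```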
In the foregoing embodiment, optionally, the specific process of generating the scene script in step 6 may be described with reference to the schematic diagram shown in fig. 5. Fig. 5 is a schematic diagram of generating a scene script from a user's utterance by using a generative large model according to an embodiment of the present application, and the specific steps are as follows:
Step 1: parse the json formatted data, such as {"scenedesc": "turn on the air conditioner at 8 every morning", "scenetype": 1};
Step 2: judge the scene type; if scenetype=1, based on an utterance similar to "turn on the air conditioner at 8 every day and set it to 26 degrees", generate the script {"if": {"time": "8:00", "repeat": "daily"}, "then": [{"air conditioner": "onoffstatus=true and temperature=26"}]};
if scenetype=2, based on an utterance similar to "turn on the air conditioner when the door is opened and set it to 26 degrees", generate the script {"if": {"door lock": "onoffstatus=true"}, "then": [{"air conditioner": "onoffstatus=true and temperature=26"}]};
if scenetype=3, based on an utterance similar to "when I say I'm back, turn on the air conditioner and set it to 26 degrees", generate the script {"if": {"voice command": "I'm back"}, "then": [{"air conditioner": "onoffstatus=true and temperature=26"}]}, wherein the voice command contains at least 3 words or phrases identical or similar to "I'm back";
Step 3: output the scene script.
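The per-type scripts of step 2 can be sketched as a small dispatch function; the JSON keys mirror the examples above, while the function name, defaults, and the synonym list for the voice command are illustrative assumptions.

```python
# Sketch of the per-type scene scripts produced in step 2 of fig. 5.
def generate_scene_script(scene_type, time="8:00", repeat="daily"):
    action = [{"air conditioner": "onoffstatus=true and temperature=26"}]
    if scene_type == 1:   # timing-type scene
        return {"if": {"time": time, "repeat": repeat}, "then": action}
    if scene_type == 2:   # linkage-type scene
        return {"if": {"door lock": "onoffstatus=true"}, "then": action}
    if scene_type == 3:   # voice-type scene, with similar trigger phrases
        return {"if": {"voice command": ["I'm back", "I'm home", "go home"]},
                "then": action}
    return None           # scenetype=0: no script is generated
```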
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method of the various embodiments of the present application.
FIG. 6 is a block diagram of a scene generation device based on a generative large model according to an embodiment of the application; as shown in fig. 6, includes:
the result obtaining module 62 is configured to identify interaction data of a target object, and obtain an identification result, where the identification result at least includes a control instruction for controlling the intelligent device;
the type obtaining module 64 is configured to input the control instruction and a scene classification template into a generative large model, and obtain a scene type of a target interaction scene output by the generative large model, where the scene classification template at least includes a correspondence between a scene type of a historical interaction scene and an instruction format of the control instruction;
The scene generation module 66 is configured to input the established scene generation template corresponding to the scene type and the control instruction into the generative large model, and generate a target interaction scene according to the scene script output by the generative large model.
Through the device, the identification result is obtained by identifying the interactive data of the target object, wherein the identification result at least comprises a control instruction for controlling the intelligent equipment; inputting the control instruction and a scene classification template into a generation type large model to obtain a scene type of a target interaction scene output by the generation type large model, wherein the scene classification template at least comprises a corresponding relation between the scene type of a historical interaction scene and an instruction format of the control instruction; inputting the established scene generation template corresponding to the scene type and the control instruction into the generation type large model, and generating a target interaction scene according to the scene script output by the generation type large model, so that the technical problem of how to generate the interaction scene based on the generation type large model in the related technology is solved, and the interaction scene which meets the requirements of a user can be generated based on the generation type large model, thereby improving the generation efficiency of the interaction scene.
Optionally, the scene generating device based on the large generating model further includes a format conversion module, configured to convert an instruction format of the control instruction into an instruction format corresponding to a preset instruction template, so as to obtain a conversion result; generating an instruction format of the control instruction according to other instruction templates under the condition that the conversion result indicates that the conversion is successful, wherein the other instruction templates respectively correspond to different scene types; and under the condition that the conversion result indicates conversion failure, sending prompt information to the target object to prompt the instruction format conversion failure of the control instruction.
Optionally, the format conversion module further includes a first format generating unit, configured to obtain a timing interaction time period of the historical interaction scenario and a control request of an intelligent device for controlling the historical interaction scenario, where a scenario type of the historical interaction scenario is a timing scenario; setting a device execution instruction contained in the control request as a first execution instruction when a first execution condition is met, wherein the first execution condition is that the request interaction time in the control request belongs to the timing interaction time period; and generating the instruction format of the control instruction according to the request interaction time and the first execution instruction according to a first instruction template, wherein the other instruction templates at least comprise first instruction templates corresponding to scene types of the timing scene.
Optionally, the format conversion module further includes a second format generating unit, configured to obtain a device interaction action of the historical interaction scene and a control request of the intelligent device for controlling the historical interaction scene, where the scene type of the historical interaction scene is an action linkage scene; set a device execution instruction contained in the control request as a second execution instruction when a second execution condition is met, where the second execution condition is that a request interaction action in the control request is consistent with the device interaction action; and generate the instruction format of the control instruction from the request interaction action and the second execution instruction according to a second instruction template, where the other instruction templates at least include the second instruction template corresponding to the scene type of the action linkage scene.
Optionally, the format conversion module further includes a third format generating unit, configured to obtain a device interaction environment of the historical interaction scene, and a control request for controlling an intelligent device of the historical interaction scene, where a scene type of the historical interaction scene is an environment linkage scene; setting a device execution instruction contained in the control request as a third execution instruction when a third execution condition is met, wherein the third execution condition is that a request interaction environment in the control request is consistent with the device interaction environment; and generating the instruction format of the control instruction by the request interaction environment and the third execution instruction according to a third instruction template, wherein the other instruction templates at least comprise the third instruction template corresponding to the scene type of the environment linkage scene.
Optionally, the format conversion module further includes a fourth format generating unit, configured to obtain a preset interaction instruction of the historical interaction scene and a control request of the intelligent device for controlling the historical interaction scene, where the scene type of the historical interaction scene is a sound control scene; set an equipment execution instruction contained in the control request as a fourth execution instruction when a fourth execution condition is met, where the fourth execution condition is that a request interaction instruction in the control request belongs to and is consistent with the preset interaction instruction; and generate the instruction format of the control instruction from the request interaction instruction and the fourth execution instruction according to a fourth instruction template, where the other instruction templates at least include the fourth instruction template corresponding to the scene type of the sound control scene.
Optionally, the scene generation device based on the generative large model further includes an instruction updating module, configured to determine all control instructions of the target object from the identification result; input all the control instructions and an instruction formatting template into the generative large model to obtain formatted target control instructions output by the generative large model; where the instruction formatting template includes an instruction format of the target control instructions and formatting operations on all the control instructions, the formatting operations including at least one of: rewriting the interaction objects of all the control instructions into the first person, formatting all the control instructions into imperative sentences, and deleting non-entity words in all the control instructions; and update the control instructions into the target control instructions.
Optionally, the scene generation module 66 is further configured to obtain, from the scene generation template, a first device type of the intelligent device, a first device attribute of the intelligent device, and the device instructions supported by the intelligent device; and input the established scene generation template corresponding to the scene type and the control instruction into the generative large model under the condition that a second device type of the intelligent device in the control instruction is consistent with the first device type, a second device attribute of the intelligent device in the control instruction is consistent with the first device attribute, and the control instruction belongs to the device instructions supported by the intelligent device.
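The consistency check performed before the template and control instruction are input into the generative large model can be sketched as follows; the dictionary field names are illustrative assumptions, not identifiers from the embodiment.

```python
# Sketch of the pre-input validation: the control instruction's device
# type and attribute must match the scene generation template, and the
# command must be one the intelligent device actually supports.
def instruction_matches_template(template, instruction):
    return (instruction["device_type"] == template["device_type"]
            and instruction["device_attr"] == template["device_attr"]
            and instruction["command"] in template["supported_commands"])
```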
An embodiment of the present application also provides a storage medium including a stored program, wherein the program executes the method of any one of the above.
Alternatively, in the present embodiment, the above-described storage medium may be configured to store program code for performing the steps of:
s1, identifying interaction data of a target object to obtain an identification result, wherein the identification result at least comprises a control instruction for controlling intelligent equipment;
S2, inputting the control instruction and a scene classification template into a generation type large model to obtain a scene type of a target interaction scene output by the generation type large model, wherein the scene classification template at least comprises a corresponding relation between the scene type of a historical interaction scene and an instruction format of the control instruction;
and S3, inputting the established scene generation template corresponding to the scene type and the control instruction into the generation type large model, and generating a target interaction scene according to the scene script output by the generation type large model.
An embodiment of the application also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
s1, identifying interaction data of a target object to obtain an identification result, wherein the identification result at least comprises a control instruction for controlling intelligent equipment;
S2, inputting the control instruction and a scene classification template into a generation type large model to obtain a scene type of a target interaction scene output by the generation type large model, wherein the scene classification template at least comprises a corresponding relation between the scene type of a historical interaction scene and an instruction format of the control instruction;
and S3, inputting the established scene generation template corresponding to the scene type and the control instruction into the generation type large model, and generating a target interaction scene according to the scene script output by the generation type large model.
Alternatively, in the present embodiment, the storage medium may include, but is not limited to: a U-disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Alternatively, specific examples in this embodiment may refer to examples described in the foregoing embodiments and optional implementations, and this embodiment is not described herein.
It will be appreciated by those skilled in the art that the modules or steps of the application described above may be implemented in a general purpose computing device, they may be concentrated on a single computing device, or distributed across a network of computing devices, they may alternatively be implemented in program code executable by computing devices, so that they may be stored in a memory device for execution by computing devices, and in some cases, the steps shown or described may be performed in a different order than that shown or described, or they may be separately fabricated into individual integrated circuit modules, or multiple modules or steps within them may be fabricated into a single integrated circuit module for implementation. Thus, the present application is not limited to any specific combination of hardware and software.
The foregoing is merely a preferred embodiment of the present application and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present application, which are intended to be comprehended within the scope of the present application.

Claims (11)

1. A scene generation method based on a large generation model, comprising the steps of:
identifying interaction data of a target object to obtain an identification result, wherein the identification result at least comprises a control instruction for controlling intelligent equipment;
inputting the control instruction and a scene classification template into a generation type large model to obtain a scene type of a target interaction scene output by the generation type large model, wherein the scene classification template at least comprises a corresponding relation between the scene type of a historical interaction scene and an instruction format of the control instruction;
and inputting the established scene generation template corresponding to the scene type and the control instruction into the generation type large model, and generating a target interaction scene according to the scene script output by the generation type large model.
2. The scene generation method based on the generated large model according to claim 1, wherein after identifying the interactive data of the target object to obtain the identification result, the method further comprises:
Converting the instruction format of the control instruction into an instruction format corresponding to a preset instruction template to obtain a conversion result;
generating an instruction format of the control instruction according to other instruction templates under the condition that the conversion result indicates that the conversion is successful, wherein the other instruction templates respectively correspond to different scene types;
and under the condition that the conversion result indicates conversion failure, sending prompt information to the target object to prompt the instruction format conversion failure of the control instruction.
3. The scene generation method based on the generative large model according to claim 2, wherein generating the instruction format of the control instruction according to the other instruction templates comprises:
acquiring a timing interaction time period of the historical interaction scene and a control request of intelligent equipment for controlling the historical interaction scene, wherein the scene type of the historical interaction scene is a timing scene;
setting a device execution instruction contained in the control request as a first execution instruction when a first execution condition is met, wherein the first execution condition is that the request interaction time in the control request belongs to the timing interaction time period;
And generating the instruction format of the control instruction according to the request interaction time and the first execution instruction according to a first instruction template, wherein the other instruction templates at least comprise first instruction templates corresponding to scene types of the timing scene.
4. The scene generation method based on the generative large model according to claim 2, wherein generating the instruction format of the control instruction according to the other instruction templates comprises:
acquiring equipment interaction actions of the historical interaction scene and a control request of intelligent equipment for controlling the historical interaction scene, wherein the scene type of the historical interaction scene is an action linkage scene; setting a device execution instruction contained in the control request as a second execution instruction when a second execution condition is met, wherein the second execution condition is that a request interaction action in the control request is consistent with the device interaction action;
and generating the instruction format of the control instruction by the request interaction action and the second execution instruction according to a second instruction template, wherein the other instruction templates at least comprise the second instruction template corresponding to the scene type of the action linkage scene.
5. The scene generation method based on the generative large model according to claim 2, wherein generating the instruction format of the control instruction according to the other instruction templates comprises:
acquiring a device interaction environment of the historical interaction scene and a control request for controlling an intelligent device in the historical interaction scene, wherein the scene type of the historical interaction scene is an environment linkage scene; setting a device execution instruction contained in the control request as a third execution instruction when a third execution condition is met, wherein the third execution condition is that the request interaction environment in the control request is consistent with the device interaction environment;
and generating the instruction format of the control instruction from the request interaction environment and the third execution instruction according to a third instruction template, wherein the other instruction templates at least comprise the third instruction template corresponding to the scene type of the environment linkage scene.
6. The scene generation method based on the generative large model according to claim 2, wherein generating the instruction format of the control instruction according to the other instruction templates comprises:
acquiring a preset interaction instruction of the historical interaction scene and a control request for controlling an intelligent device in the historical interaction scene, wherein the scene type of the historical interaction scene is a voice control scene;
setting a device execution instruction contained in the control request as a fourth execution instruction when a fourth execution condition is met, wherein the fourth execution condition is that the request interaction instruction in the control request is consistent with the preset interaction instruction;
and generating the instruction format of the control instruction from the request interaction instruction and the fourth execution instruction according to a fourth instruction template, wherein the other instruction templates at least comprise the fourth instruction template corresponding to the scene type of the voice control scene.
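Claims 4 to 6 share one pattern: compare a trigger carried in the control request against the corresponding property recorded for the historical interaction scene and, when they are consistent, fill a scene-type-specific instruction template. A hedged sketch of that shared pattern (the template strings and field names are illustrative, not taken from the patent):

```python
# Illustrative second/third/fourth instruction templates keyed by scene type
TEMPLATES = {
    "action_linkage": "when {trigger} is performed, execute: {instruction}",
    "environment_linkage": "when the environment reaches {trigger}, execute: {instruction}",
    "voice_control": "on hearing '{trigger}', execute: {instruction}",
}

def format_linkage_instruction(scene_type, scene_trigger, control_request):
    """Second/third/fourth execution condition: the trigger in the control
    request must be consistent with the trigger recorded for the historical
    interaction scene (device action, environment state, or spoken phrase)."""
    request_trigger = control_request["trigger"]
    if request_trigger != scene_trigger:
        return None  # condition not met; no instruction format is generated
    instruction = control_request["device_execution_instruction"]
    return TEMPLATES[scene_type].format(trigger=request_trigger,
                                        instruction=instruction)

req = {"trigger": "good night", "device_execution_instruction": "close the curtains"}
print(format_linkage_instruction("voice_control", "good night", req))
# -> on hearing 'good night', execute: close the curtains
```

The three claims then differ only in which scene property is compared and which template is selected.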
7. The scene generation method based on a generative large model according to claim 1, wherein before inputting the scene generation template corresponding to the scene type and the control instruction, which have been established, into the generative large model, the method further comprises:
determining all control instructions of the target object from the identification result;
inputting all the control instructions and an instruction formatting template into the generative large model to obtain formatted target control instructions output by the generative large model;
wherein the instruction formatting template includes an instruction format of the target control instruction and formatting operations on all the control instructions, the formatting operations including at least one of: modifying the interactive objects of all the control instructions into the first person, formatting all the control instructions into imperative sentences, and deleting non-entity words in all the control instructions;
and updating the control instruction to the target control instruction.
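In the patent these formatting operations are delegated to the generative large model via the formatting template; a rule-based sketch makes the three operations concrete. The polite-phrase patterns and the stand-in "non-entity word" list are assumptions for illustration only (a real system would use a curated vocabulary or an NER model):

```python
import re

# Illustrative stop words standing in for "non-entity words"
NON_ENTITY_WORDS = {"please", "maybe", "kindly"}

def format_control_instruction(instruction: str) -> str:
    """Apply the claim's formatting operations: cast the sentence into an
    imperative, rewrite the interactive object to the first person, and
    delete non-entity words."""
    text = instruction.lower().rstrip("?!.")
    # Cast into an imperative: drop a leading polite request frame
    text = re.sub(r"^(could you|can you|would you)\s+", "", text)
    # Delete non-entity words
    words = [w for w in text.split() if w not in NON_ENTITY_WORDS]
    # Rewrite the interactive object to the first person
    words = ["my" if w == "your" else w for w in words]
    return " ".join(words)

print(format_control_instruction("Could you please turn on your bedroom light?"))
# -> turn on my bedroom light
```

The output is the normalized form that claim 7 calls the target control instruction.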
8. The scene generation method based on the generative large model according to claim 1, wherein inputting the scene generation template corresponding to the scene type and the control instruction, which have been established, into the generative large model, comprises:
acquiring, from the scene generation template, a first device type of the intelligent device in the scene type, a first device attribute of the intelligent device, and the device instructions supported by the intelligent device;
and inputting the established scene generation template corresponding to the scene type and the control instruction into the generative large model under the condition that a second device type of the intelligent device in the control instruction is consistent with the first device type, a second device attribute of the intelligent device in the control instruction is consistent with the first device attribute, and the control instruction belongs to the device instructions supported by the intelligent device.
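Claim 8 is a three-way consistency gate applied before anything is sent to the generative large model. A minimal sketch, assuming simple dictionary representations for the template entry and the control instruction (the field names are illustrative):

```python
def instruction_is_consistent(scene_template_entry, control_instruction):
    """Gate from claim 8: only feed the scene generation template and the
    control instruction into the generative large model when device type,
    device attribute, and supported instruction all agree."""
    return (
        control_instruction["device_type"] == scene_template_entry["device_type"]
        and control_instruction["device_attribute"] == scene_template_entry["device_attribute"]
        and control_instruction["instruction"] in scene_template_entry["supported_instructions"]
    )

entry = {
    "device_type": "air_conditioner",
    "device_attribute": "temperature",
    "supported_instructions": {"set_temperature", "power_on", "power_off"},
}
cmd = {"device_type": "air_conditioner", "device_attribute": "temperature",
       "instruction": "set_temperature"}
print(instruction_is_consistent(entry, cmd))  # -> True
```

This pre-check keeps unsupported or mismatched instructions from reaching the model at all.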
9. A scene generation apparatus based on a generative large model, comprising:
the result obtaining module, used for identifying interaction data of a target object to obtain an identification result, wherein the identification result at least comprises a control instruction for controlling an intelligent device;
the type obtaining module, used for inputting the control instruction and a scene classification template into a generative large model to obtain the scene type of the target interaction scene output by the generative large model, wherein the scene classification template at least comprises the correspondence between the scene type of the historical interaction scene and the instruction format of the control instruction;
and the scene generation module, used for inputting the established scene generation template corresponding to the scene type and the control instruction into the generative large model, and generating the target interaction scene according to a scene script output by the generative large model.
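The three modules of claim 9 form a recognize → classify → generate pipeline around the generative large model. A minimal structural sketch, with the model stubbed out as a plain callable (`llm`, the prompt shapes, and the template types are assumptions, not the patent's interface):

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class SceneGenerator:
    """Recognize -> classify -> generate pipeline from claim 9."""
    llm: Callable[[str], str]      # stand-in for the generative large model
    classification_template: str   # maps instruction formats to scene types
    generation_templates: Dict[str, str]  # scene type -> generation template

    def recognize(self, interaction_data: str) -> str:
        # Result obtaining module: extract the control instruction from the
        # target object's interaction data (trivially the identity here).
        return interaction_data.strip()

    def classify(self, control_instruction: str) -> str:
        # Type obtaining module: the model returns the scene type.
        return self.llm(f"{self.classification_template}\n{control_instruction}")

    def generate(self, scene_type: str, control_instruction: str) -> str:
        # Scene generation module: the model returns a scene script.
        return self.llm(f"{self.generation_templates[scene_type]}\n{control_instruction}")

    def run(self, interaction_data: str) -> str:
        instruction = self.recognize(interaction_data)
        scene_type = self.classify(instruction)
        return self.generate(scene_type, instruction)
```

Each module corresponds to one method, and `run` chains them in the order the claims describe: identification result, then scene type, then scene script.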
10. A computer readable storage medium, characterized in that the computer readable storage medium comprises a stored program, wherein the program, when run, performs the method of any of claims 1 to 8.
11. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method according to any of the claims 1 to 8 by means of the computer program.
CN202310800447.6A 2023-06-30 2023-06-30 Scene generation method, device and storage medium based on generation type large model Pending CN116913274A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310800447.6A CN116913274A (en) 2023-06-30 2023-06-30 Scene generation method, device and storage medium based on generation type large model


Publications (1)

Publication Number Publication Date
CN116913274A true CN116913274A (en) 2023-10-20

Family

ID=88362025

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310800447.6A Pending CN116913274A (en) 2023-06-30 2023-06-30 Scene generation method, device and storage medium based on generation type large model

Country Status (1)

Country Link
CN (1) CN116913274A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117706954A (en) * 2024-02-06 2024-03-15 青岛海尔科技有限公司 Method and device for generating scene, storage medium and electronic device
CN117706954B (en) * 2024-02-06 2024-05-24 青岛海尔科技有限公司 Method and device for generating scene, storage medium and electronic device


Similar Documents

Publication Publication Date Title
JP6726800B2 (en) Method and apparatus for human-machine interaction based on artificial intelligence
JP6671020B2 (en) Dialogue act estimation method, dialogue act estimation device and program
CN108255934B (en) Voice control method and device
CN107909998A (en) Phonetic order processing method, device, computer equipment and storage medium
JP3737714B2 (en) Method and apparatus for identifying end-user transactions
JP6823809B2 (en) Dialogue estimation method, dialogue activity estimation device and program
WO2023168838A1 (en) Sentence text recognition method and apparatus, and storage medium and electronic apparatus
KR101916174B1 (en) Method and apparatus for processing language based on machine learning
CN111930912A (en) Dialogue management method, system, device and storage medium
CN111933135A (en) Terminal control method and device, intelligent terminal and computer readable storage medium
CN112667791A (en) Latent event prediction method, device, equipment and storage medium
CN116913274A (en) Scene generation method, device and storage medium based on generation type large model
WO2023173596A1 (en) Statement text intention recognition method and apparatus, storage medium, and electronic apparatus
JP6633556B2 (en) Acoustic model learning device, speech recognition device, acoustic model learning method, speech recognition method, and program
JP2006072477A (en) Dialogue strategy learning method, program, and device, and storage medium
US11941414B2 (en) Unstructured extensions to rpa
CN117689020B (en) Method and device for constructing intelligent home body based on large model and electronic equipment
CN117807215B (en) Statement multi-intention recognition method, device and equipment based on model
CN114911535B (en) Application program component configuration method, storage medium and electronic device
CN116504222A (en) Text conversion method and device, storage medium and electronic device
CN117010378A (en) Semantic conversion method and device, storage medium and electronic device
CN117765949B (en) Semantic dependency analysis-based statement multi-intention recognition method and device
CN117370544A (en) Updating method and device for data annotation, storage medium and electronic device
US20220137917A1 (en) Method and system for assigning unique voice for electronic device
CN117689020A (en) Method and device for constructing intelligent home body based on large model and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination