CN113741910A - Scene interaction method and device, electronic equipment and storage medium

Scene interaction method and device, electronic equipment and storage medium

Info

Publication number
CN113741910A
CN113741910A (application CN202111020597.2A)
Authority
CN
China
Prior art keywords
scene
information
output
command
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111020597.2A
Other languages
Chinese (zh)
Inventor
冷冰
叶建云
张广程
吴晓明
徐慧敏
张义保
吴佳飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd filed Critical Shanghai Sensetime Intelligent Technology Co Ltd
Priority to CN202111020597.2A
Publication of CN113741910A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00: Arrangements for software engineering
    • G06F 8/60: Software deployment

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure relates to a scene interaction method and apparatus, an electronic device, and a storage medium. The method is applied to a server that runs at least one service process; each service process has at least one scene deployed in it, and each scene includes an input interface and an output interface. The method includes: when the input interface of a first scene receives input information, performing processing corresponding to the category of the input information to obtain a processing result, where the first scene is any scene deployed in the service process and the input information includes at least one of access information of a terminal, device state information of the terminal, external data information, first event information, and first alarm information; and, when the processing result includes output information, outputting the output information through the output interface of the first scene. The disclosed embodiments can achieve efficient digital management.

Description

Scene interaction method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a scene interaction method and apparatus, an electronic device, and a storage medium.
Background
With the development of technologies such as cloud computing, the Internet of Things, 5G, and artificial intelligence, application scenarios can gradually be perceived, controlled, and coordinated in a fully digitalized, collaborative, and intelligent manner, making their handling more proactive, efficient, and fine-grained. These application scenarios include whole-domain scenarios such as city operation and city management, as well as scenarios in subdivided domains such as smart transportation, smart communities, smart fire protection, smart healthcare, and education.
However, an application scenario usually involves many heterogeneous devices/systems. The intelligent hardware, sensor data, and structured information of these different scenarios must all be connected to and processed by a system platform (e.g., an Internet of Things platform or a digital twin platform), which must coordinate the heterogeneous devices/systems under strict real-time requirements; such a platform is therefore difficult to build and debug.
Disclosure of Invention
The present disclosure provides a scene interaction technical solution.
According to an aspect of the present disclosure, a scene interaction method is provided, which is applied to a server, where at least one service process runs in the server, at least one scene is deployed in each service process, each scene includes an input interface and an output interface, and the method includes:
under the condition that an input interface of a first scene receives input information, executing corresponding processing on the input information according to the category of the input information to obtain a processing result; the first scene is any scene deployed in the service process, and the input information includes at least one of access information of a terminal, device state information of the terminal, external data information, first event information and first alarm information; and outputting the output information through an output interface of the first scene under the condition that the processing result comprises the output information.
In a possible implementation manner, the performing corresponding processing on the input information according to its category to obtain a processing result includes: when the input information includes access information of a terminal, confirming configuration information of the terminal to obtain a configuration confirmation result, where the configuration information includes at least one of a device identifier of the terminal, geographical location information, basic device information, alarm category, real-time requirement information, and acquisition interval information, and the processing result includes the configuration confirmation result.
In a possible implementation manner, the performing corresponding processing on the input information according to its category to obtain a processing result includes: when the input information includes device state information of a terminal, judging whether the terminal is in an abnormal state according to the device state information; when the terminal is in an abnormal state, generating second alarm information, where the processing result includes the second alarm information; and outputting the second alarm information within the first scene through an output device of the first scene. Here, outputting the output information through the output interface of the first scene when the processing result includes output information includes: outputting the second alarm information through the output interface of the first scene according to scene organization information of the first scene, where the scene organization information indicates an association relationship between the first scene and other scenes.
In a possible implementation manner, the performing corresponding processing on the input information according to its category to obtain a processing result includes: when the input information includes external data information, processing the data information according to the data information and scene configuration information of the first scene to obtain second event information corresponding to the data information, where the processing result includes the second event information; and outputting the second event information within the first scene through an output device of the first scene. Here, outputting the output information through the output interface of the first scene when the processing result includes output information includes: outputting the second event information through the output interface of the first scene according to scene organization information of the first scene, where the scene organization information indicates an association relationship between the first scene and a second scene.
In a possible implementation manner, the performing corresponding processing on the input information according to its category to obtain a processing result includes: when the input information includes first event information and/or first alarm information, determining an output mode of the first event information and/or the first alarm information according to scene configuration information and scene organization information of the first scene, where the output mode is scene-internal output or scene-external output. Here, outputting the output information through the output interface of the first scene when the processing result includes output information includes: when the output mode is scene-external output, outputting the first event information and/or the first alarm information through the output interface of the first scene.
In one possible implementation, the first scene further includes a command input interface and a command output interface, and the method further includes: when the command input interface of the first scene receives command information, determining whether the command information is directed at the first scene according to scene configuration information of the first scene; and, if the command information is directed at the first scene, sending the command information to a terminal associated with the first scene so that the terminal executes the command.
In one possible implementation, the method further includes: under the condition that a command input interface of the first scene receives command information, determining whether the command information meets a forwarding condition according to scene organization information of the first scene; and outputting the command information through a command output interface of the first scene under the condition that the command information meets a forwarding condition.
In one possible implementation, the method further includes: in response to a scene creation operation, allocating a scene identifier to a first scene to be created; in response to a configuration operation on the first scene, determining scene configuration information of the first scene, where the scene configuration information includes device organization information and scene basic information of the first scene; in response to an association operation between the first scene and a second scene, determining scene organization information of the first scene, where the scene organization information indicates an association relationship between the first scene and the second scene; and deploying the first scene into a preset service process. The device organization information includes at least one of the device type of a terminal corresponding to the first scene, commands receivable by a device, device functions, and interface parameters of the device; the scene basic information includes at least one of an input interface, an output interface, a scene identifier, a scene type, a scene name, and scene function information of the first scene.
According to an aspect of the present disclosure, a scene interaction apparatus is provided, which is applied to a server, where at least one service process runs in the server, at least one scene is deployed in each service process, each scene includes an input interface and an output interface, and the apparatus includes:
the information processing module is used for executing corresponding processing on the input information according to the category of the input information under the condition that the input information is received by the input interface of the first scene to obtain a processing result; the first scene is any scene deployed in the service process, and the input information includes at least one of access information of a terminal, device state information of the terminal, external data information, first event information and first alarm information;
and the information output module is used for outputting the output information through the output interface of the first scene under the condition that the processing result comprises the output information.
In one possible implementation manner, the information processing module is configured to: when the input information includes access information of a terminal, confirm configuration information of the terminal to obtain a configuration confirmation result, where the configuration information includes at least one of a device identifier of the terminal, geographical location information, basic device information, alarm category, real-time requirement information, and acquisition interval information, and the processing result includes the configuration confirmation result.
In one possible implementation manner, the information processing module is configured to: when the input information includes device state information of a terminal, judge whether the terminal is in an abnormal state according to the device state information; when the terminal is in an abnormal state, generate second alarm information, where the processing result includes the second alarm information; and output the second alarm information within the first scene through an output device of the first scene;
wherein the information output module is configured to: output the second alarm information through the output interface of the first scene according to scene organization information of the first scene, where the scene organization information indicates an association relationship between the first scene and other scenes.
In one possible implementation manner, the information processing module is configured to: when the input information includes external data information, process the data information according to the data information and scene configuration information of the first scene to obtain second event information corresponding to the data information, where the processing result includes the second event information; and output the second event information within the first scene through an output device of the first scene. The information output module is configured to: output the second event information through the output interface of the first scene according to scene organization information of the first scene, where the scene organization information indicates an association relationship between the first scene and a second scene.
In a possible implementation manner, the information output module is configured to: when the input information includes first event information and/or first alarm information, determine an output mode of the first event information and/or the first alarm information according to scene configuration information and scene organization information of the first scene, where the output mode is scene-internal output or scene-external output; and, when the output mode is scene-external output, output the first event information and/or the first alarm information through the output interface of the first scene.
In one possible implementation, the first scene further includes a command input interface and a command output interface, and the apparatus further includes: a command information determining module, configured to determine, when the command input interface of the first scene receives command information, whether the command information is directed at the first scene according to scene configuration information of the first scene; and a command executing module, configured to send the command information to a terminal associated with the first scene so that the terminal executes the command, when the command information is directed at the first scene.
In one possible implementation, the apparatus further includes: the condition determining module is used for determining whether the command information meets forwarding conditions according to scene organization information of the first scene under the condition that a command input interface of the first scene receives the command information; and the command output module is used for outputting the command information through a command output interface of the first scene under the condition that the command information meets the forwarding condition.
In one possible implementation, the apparatus further includes: a scene identifier allocation module, configured to allocate a scene identifier to a first scene to be created in response to a scene creation operation; a scene configuration module, configured to determine scene configuration information of the first scene in response to a configuration operation on the first scene, where the scene configuration information includes device organization information and scene basic information of the first scene; a scene organization module, configured to determine scene organization information of the first scene in response to an association operation between the first scene and a second scene, where the scene organization information indicates an association relationship between the first scene and the second scene; and a scene deployment module, configured to deploy the first scene into a preset service process. The device organization information includes at least one of the device type of a terminal corresponding to the first scene, commands receivable by a device, device functions, and interface parameters of the device; the scene basic information includes at least one of an input interface, an output interface, a scene identifier, a scene type, a scene name, and scene function information of the first scene.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the embodiments of the disclosure, a scene is deployed in a service process and includes an input interface and an output interface. The scene can receive input information through the input interface and execute processing corresponding to the category of the input information to obtain a processing result; when the processing result includes output information, the output information is output through the output interface, so that data streams flow according to the organizational form defined by the scenes, achieving efficient digital management.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of a scene interaction method according to an embodiment of the present disclosure.
Fig. 2 shows a schematic diagram of a scene interaction process according to an embodiment of the present disclosure.
Fig. 3 shows a block diagram of a scene interaction device according to an embodiment of the present disclosure.
Fig. 4 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Fig. 5 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
In the related art, digital twin technology can be adopted to digitally manage application scenarios. For example, digital twin technology is applied to smart city construction, where the digital city is built, evaluated, and safeguarded ahead of the real one to support the high-quality development of the real city. A digital twin city adds a city information model mapping on top of the platforms already built in a traditional smart city, such as the Internet of Things platform, the big data platform, and the AI enabling and application support platform, and can achieve accurate real-time perception, control, and coordination of a large system of many heterogeneous parts. The key to digital twin construction is the cooperation of numerous heterogeneous devices/systems, which makes the system platform difficult to build and debug.
According to the scene interaction method of the embodiments of the present disclosure, an entire application scenario can be divided into multiple scenes, and these scenes can be cross-combined into the whole. Multiple scenes may exist for different kinds of people, things, and so on within the same space. The digital twin corresponding to each scene has its own configuration information and scene data; this configuration information is organized as digital twin information, and every smart device and sensor is associated with a scene. A simple scene may consist of several sensors, devices, or products; a larger, complex scene may be combined from multiple simple scenes. Once a scene is defined, data reports and commands within it are transmitted according to its interfaces, and data streams flow according to the organizational form the scene defines. The digital twin of a scene can be indexed by its scene identifier (SID) for display, sending of control commands, and so on, thereby achieving efficient digital management.
Fig. 1 shows a flowchart of a scene interaction method according to an embodiment of the present disclosure. The method can be applied to a server, such as a cloud server and the like. The server runs at least one service process, each service process is provided with at least one scene, and each scene comprises an input interface and an output interface.
As shown in fig. 1, the scene interaction method includes:
in step S11, when the input interface of the first scenario receives input information, corresponding processing is performed on the input information according to the category of the input information, so as to obtain a processing result;
the first scene is any scene deployed in the service process, and the input information includes at least one of access information of a terminal, device state information of the terminal, external data information, first event information and first alarm information;
in step S12, in a case where output information is included in the processing result, the output information is output through the output interface of the first scene.
For example, at least one service process may be run in the server, and the service process may be, for example, a digital twin service process for running a digital scene corresponding to the real scene, so as to implement digitization of the real scene. It should be understood that the service process may be other types of service processes, and the disclosure is not limited thereto.
In one possible implementation, at least one scene is deployed in the service process, including a digital scene (also referred to as a virtual scene) corresponding to a real scene, for example, an intelligent elevator scene corresponding to a real elevator scene, an access control security scene, and the like.
In a possible implementation manner, for the whole application scenario managed by the server, multiple scenes can be defined separately, realizing automatic information transfer with each sensor and device in the actual physical space and achieving the goals of state perception, real-time acquisition, and accurate execution.
Data uploading and command issuing for the physical devices (terminals) corresponding to each scene in the digital twin service process are controlled through the multilayer mapping of scenes and encapsulated layer by layer within them, so that the upper layers are simple and easy to display and understand while the lower layers carry more detail and make problems easy to trace, realizing efficient digital management.
In one possible implementation, when a user creates a new scene, the system may assign the scene a unique scene identifier (SID).
In a possible implementation manner, the user may configure the scene and determine its scene configuration information, which includes device organization information and scene basic information. The device organization information includes the device types of the terminals corresponding to the scene, the commands each device can receive, device functions, the devices' interface parameters, and so on. For example, an intelligent elevator scene comprises terminals such as an elevator, a camera, and a voice acquisition device; the elevator's function is transporting people and/or objects; its receivable commands include opening the elevator door, closing the elevator door, and going to a set floor; and it accesses the system through a wired/wireless connection.
In a possible implementation manner, the scene basic information includes the input interface, output interface, scene identifier, scene type, scene name, scene function information, and the like of the scene. The scene identifier is the SID allocated by the system for the scene; scene names are, for example, "intelligent elevator scene" or "access control security scene"; scene types are, for example, access control management or cell/area management; the scene function information is the function realized by the scene, such as face recognition.
In one possible implementation, the input interface indicates the sources (e.g., source addresses), receiving modes, and receiving parameters of information the scene can receive, so that the scene can receive information through it, such as access information of a terminal, device state information of a terminal, external data information, and events/alarms sent by other scenes.
In one possible implementation, the output interface indicates to which destinations (e.g., destination addresses), in which transmission modes, and with which transmission parameters the scene may send information, such as events/alarms generated by the scene, processing results, and forwarded events/alarms.
The present disclosure does not limit the content of the device organization information and the scene basic information, nor the types and definitions of the input interface and the output interface.
In one possible implementation, the user may associate the scene with other scenes. For example, previously created smart elevator scenes A11 and A12, corresponding to two elevators in a building, can be associated with the building management scene A1 of that building, with A1 as the upper-layer scene of A11 and A12. Thereafter, an area management scene A may also be created and associated with the building management scene A1, as the upper-layer scene of A1.
In a possible implementation manner, after the association, the scene organization information of the scene may be obtained, indicating the association relationship between the scene and other scenes, that is, the upper/lower-layer relationship information of the scene, including its lower-layer (downstream) scenes, upper-layer (upstream) scenes, and so on. An upper-layer scene may be associated with multiple lower-layer scenes, and each scene is indexed by its scene identifier (SID).
In a possible implementation manner, the scene configuration information (including the device organization information and the scene basic information), the scene organization information, and other definition information may be stored in a configuration database of the server, which is not limited in this disclosure.
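To make the data model concrete, the following Python sketch models a scene record and a configuration database keyed by SID. It is purely illustrative: the disclosure prescribes no schema, and every name here (`Scene`, `CONFIG_DB`, `associate`) is our assumption.

```python
from dataclasses import dataclass, field

@dataclass
class Scene:
    """Illustrative digital-twin scene record; not the patented schema."""
    sid: str                                        # scene identifier (SID)
    name: str = ""
    scene_type: str = ""
    functions: list = field(default_factory=list)   # e.g. ["face_recognition"]
    devices: dict = field(default_factory=dict)     # device organization info, by DID
    upstream: list = field(default_factory=list)    # SIDs of upper-layer scenes
    downstream: list = field(default_factory=list)  # SIDs of lower-layer scenes

# Hypothetical configuration database: scene configuration information and
# scene organization information, indexed by scene identifier (SID).
CONFIG_DB: dict = {}

def associate(upper: Scene, lower: Scene) -> None:
    """Record the association relationship between two scenes."""
    upper.downstream.append(lower.sid)
    lower.upstream.append(upper.sid)

# The hierarchy from the text: elevators A11/A12 under building A1 under area A.
a, a1 = Scene("SID-A", "area management"), Scene("SID-A1", "building management")
a11 = Scene("SID-A11", "smart elevator 1")
a12 = Scene("SID-A12", "smart elevator 2")
for s in (a, a1, a11, a12):
    CONFIG_DB[s.sid] = s
associate(a, a1); associate(a1, a11); associate(a1, a12)
```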
After the creation of the scene is completed, the scene can be deployed to a service process to be run. Each service process can run one scene or a plurality of scenes of a certain class, and the number and the arrangement mode of the scenes in the service process are not limited by the disclosure.
For any scene deployed in any service process (referred to as the first scene), interaction with other devices/scenes may be performed through the input interface and output interface of the first scene.
In step S11, when the input interface of the first scene receives input information, corresponding processing is performed on the input information according to its category to obtain a processing result. The input information includes at least one of access information of a terminal, device state information of the terminal, external data information, first event information, and first alarm information. In step S12, when the processing result includes output information, the output information is output through the output interface of the first scene, that is, to an external device or to other scenes.
In one possible implementation, the input information includes access information of a terminal. Terminals such as sensors and devices in the actual physical space can be connected to the platform (e.g., an Internet of Things platform running on a cloud server) according to its connection requirements, and the platform acquires each terminal's device identifier (DID), geographical location information, basic device information, and the like.
In one possible implementation, the first scene may detect, through its input interface, whether a new physical device (terminal) corresponding to the first scene has been connected. If a new terminal is connected, its access information is received through the input interface. In this case, the first scene may perform one-by-one handshake confirmation of the terminal's configuration information, such as its device identifier (DID), geographical location information, basic device information, alarm category, real-time requirement information, and acquisition interval information, to obtain a configuration confirmation result, so that it can interact with the terminal in subsequent processing.
In one possible implementation, the input information includes device state information of a terminal. For a connected terminal, the first scene may acquire its device state information through the input interface and judge from it whether the terminal is in an abnormal state, for example whether the device temperature is within the normal range or whether a camera is capturing images normally. If the terminal is in an abnormal state, corresponding alarm information can be generated.
In a possible implementation manner, the first scene may complete output at this layer through an output device corresponding to the first scene (e.g., a display device or a database) according to a predefined output form, for example, outputting the alarm information to the display interface of a display device or writing it to a database for storage.
In a possible implementation manner, if the alarm information needs to be output to an external device or another scene, the first scene may, in step S12, transmit it through the output interface to the external device or an upper-layer scene according to a predefined output form.
In one possible implementation, the input information includes external data information, such as data collected by a terminal (e.g., images collected by a camera, voice collected by a microphone), data input by an internet of things platform, data input by a human, data input by other scenes, and the like. When the first scene receives external data information through the input interface, the data information can be processed according to scene configuration information of the first scene, and second event information corresponding to the data information is obtained.
For example, in an access control scene, the scene function included in the scene configuration information is face recognition, and whether to open the gate is determined according to the recognition result. When the first scene receives a face image captured by a camera through the input interface, it can perform face recognition on the image and automatically match personnel information; if the person is matched in the face library, the corresponding second event information is determined to be "normal person, entry permitted".
In this case, on the one hand, output can be completed at this layer through the output device corresponding to the access control scene, for example, controlling the gate to open, storing the person's entry record in the database, and displaying the entry on the display interface of the access control device; on the other hand, in step S12, the second event information may be output through the output interface of the access control scene according to its scene organization information, for example, sent to an associated upper-layer scene (e.g., an attendance management scene or a building management scene).
In one possible implementation, the input information includes first event information and/or first alarm information, such as event information and/or alarm information from a terminal, or from another scene (referred to as a second scene). When the first scene receives first event information and/or first alarm information through the input interface, it can determine their output mode according to the scene configuration information and scene organization information of the first scene: scene-internal output (completing output at the current layer) or scene-external output (output to other scenes).
In a possible implementation manner, if the output mode is scene-internal output, the information is output directly to the display interface of a display device, written to a database for storage, and so on; if the output mode is scene-external output, it is output in step S12 to the associated upper-layer scene through the output interface of the first scene.
In a possible implementation manner, the first scene may also process the first event information and/or the first alarm information before outputting it, which is not limited in this disclosure.
According to the embodiments of the present disclosure, a scene is deployed in a service process and includes an input interface and an output interface; the scene can receive input information through the input interface and execute processing corresponding to the category of the input information to obtain a processing result. When the processing result includes output information, the output information is output through the output interface, so that data streams flow according to the organizational form defined by the scenes, achieving efficient digital management.
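Read as code, steps S11/S12 amount to dispatch-by-category followed by optional forwarding. A minimal sketch, continuing the illustrative `Scene` record above; the category tags and handler names are our own, and the four handlers are defined in the later sketches (Python resolves them at call time):

```python
def render_locally(scene, info):
    """Scene-internal output: display interface, database write, etc."""
    print(f"[{scene.name}] local output: {info}")

def scene_output(scene, info):
    """Step S12: emit output information through the output interface,
    i.e. to the upper-layer scenes named by the scene organization info."""
    for sid in scene.upstream:
        print(f"{scene.sid} -> {sid}: {info}")

def handle_input(scene, message):
    """Step S11: process input information according to its category."""
    handlers = {
        "access":       confirm_terminal_config,  # terminal access information
        "device_state": check_device_state,       # device state information
        "data":         process_external_data,    # external data information
        "event":        route_event_or_alarm,     # first event information
        "alarm":        route_event_or_alarm,     # first alarm information
    }
    result = handlers[message["category"]](scene, message)
    if result.get("output") is not None:          # step S12
        scene_output(scene, result["output"])
    return result
```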
A scene interaction method according to an embodiment of the present disclosure is explained below.
As described above, the entire application scene may be divided into a plurality of scenes, so as to realize automatic information transfer with each sensor and device in the actual physical space. Before performing steps S11-S12, various scenes may be created.
In a possible implementation manner, the scene interaction method according to the embodiment of the present disclosure may further include:
in response to a scene creation operation, allocating a scene identifier to a first scene to be created;
in response to a configuration operation on the first scene, determining scene configuration information of the first scene, wherein the scene configuration information comprises device organization information and scene basic information of the first scene;
in response to an association operation between the first scene and a second scene, determining scene organization information of the first scene, wherein the scene organization information indicates an association relationship between the first scene and the second scene;
deploying the first scene into a preset service process;
the device organization information comprises at least one of a device type of a terminal corresponding to the first scene, a command receivable by a device, a device function and an interface parameter of the device; the scene basic information includes at least one of an input interface, an output interface, a scene identifier, a scene type, a scene name, and scene function information of the first scene.
For example, a scene creation function may be provided in the server, and a user (e.g., a developer of the digital twin service process, a maintainer, etc.) may log in to the server through the client to perform scene creation.
In one possible implementation, when a user wants to create a new scene, the user may perform a scene creation operation, for example, clicking the corresponding scene creation control. In response, the server may assign a unique scene identifier (SID) to the scene to be created (referred to as the first scene).
In a possible implementation manner, the user may perform configuration operations on the first scene, for example, selecting the terminals corresponding to the first scene, setting the interface parameters for interacting with each terminal, setting the commands a terminal can receive, and setting a terminal's data acquisition parameters, to configure the device organization information; and, for example, setting the scene name, scene type, and parameters of the input and output interfaces, to configure the scene basic information.
In a possible implementation manner, the server, in response to a configuration operation of a user, may determine scene configuration information of a first scene, including device organization information and scene basic information, where the device organization information includes at least one of a device type of a terminal corresponding to the first scene, a command receivable by a device, a device function, and an interface parameter of the device; the scene basic information includes at least one of an input interface, an output interface, a scene identifier, a scene type, a scene name, and scene function information of the first scene. The present disclosure is not so limited.
In one possible implementation, the user may associate the first scene with other scenes (referred to as second scenes). For example, if smart elevator scenes A11 and A12 were created corresponding to two elevators in a building, the building management scene A1 of that building can be associated with A11 and A12 as their upper-layer scene. Thereafter, an area management scene A may also be created and associated with the building management scene A1, with A1 as the lower-layer scene of A.
In a possible implementation manner, in response to the user's scene association operation, the server determines the scene organization information of the first scene, which indicates the association relationship between the first scene and the second scene, that is, the upper/lower-layer relationship information of the scene, including its lower-layer and upper-layer scenes. An upper-layer scene may be associated with multiple lower-layer scenes, and each scene is indexed by its scene identifier (SID).
It should be understood that the user may also perform other setting operations on the first scenario, which is not limited by this disclosure.
In a possible implementation manner, the scene configuration information (including the device organization information and the scene basic information), the scene organization information, and other definition information may be stored in a configuration database of the server, which is not limited in this disclosure.
By the method, the creation and organization processes of the scenes can be realized, so that data can be transmitted between the scenes and the terminals and between the scenes through the predefined organization relation, and efficient digital management is realized.
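As a sketch of this create, configure, associate, and deploy sequence (continuing the running example; `uuid4` as the SID source and the function names are our assumptions, since the disclosure leaves the API open):

```python
import uuid

def create_scene(name, scene_type=""):
    """Scene creation operation: allocate a unique SID and register the scene."""
    scene = Scene(sid=f"SID-{uuid.uuid4().hex[:8]}", name=name,
                  scene_type=scene_type)
    CONFIG_DB[scene.sid] = scene
    return scene

def configure_scene(scene, devices=None, functions=None):
    """Configuration operation: record device organization information
    and scene basic information."""
    scene.devices.update(devices or {})
    scene.functions.extend(functions or [])

def deploy(scene, service_process):
    """Deploy the scene into a preset service process."""
    service_process.append(scene.sid)

# Usage: an access control scene run by one service process, under A1.
svc_process = []
gate = create_scene("access control security", "access_control")
configure_scene(gate,
                devices={"DID-cam-1": {"type": "camera", "accepts": ["reboot"]}},
                functions=["face_recognition"])
associate(a1, gate)          # building management scene A1 as upper layer
deploy(gate, svc_process)
```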
After the creation of the scene is completed, the scene can be deployed to a service process to be run. Each service process can run one scene or a plurality of scenes of a certain class, and the number and the arrangement mode of the scenes in the service process are not limited by the disclosure.
For any scene (first scene) deployed in any service process, the processing of steps S11-S12 may be performed through interaction between the first scene's input and output interfaces and other devices/scenes.
In one possible implementation, step S11 may include:
confirming the configuration information of the terminal to obtain a configuration confirmation result under the condition that the input information comprises the access information of the terminal,
the configuration information includes at least one of the device identifier of the terminal, geographical location information, basic device information, alarm category, real-time requirement information, and acquisition interval information, and the processing result includes the configuration confirmation result.
For example, each terminal such as a sensor or device in the actual physical space may be connected to the platform (e.g., an Internet of Things platform running on a cloud server) according to its connection requirements, and the platform acquires the device identifier (DID), geographical location information, basic device information, and the like.
In one possible implementation, the first scene may detect, through its input interface, whether a new physical device (terminal) corresponding to the first scene has been connected. If a new terminal is connected, its access information is received through the input interface. In this case, the first scene may perform handshake confirmation of the terminal's configuration information one item at a time, for example its device identifier (DID), geographical location information, basic device information, alarm category, real-time requirement information, and acquisition interval information.
In a possible implementation manner, if all of the terminal's configuration information can be confirmed, the configuration confirmation result is that the terminal is granted access, and normal interaction with the terminal can proceed in subsequent processing; if some configuration information cannot be confirmed, for example an interface protocol of the terminal is not supported, corresponding operations may be performed, such as outputting to the output device corresponding to the first scene (for related personnel to view on a display interface) or modifying the terminal's configuration information, which is not limited by the present disclosure.
In this way, the configuration confirmation process for a newly connected terminal can be realized, so that the scene can interact with the terminal subsequently and the terminal can be managed through the scene.
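A sketch of the one-by-one handshake confirmation, under the assumption that "confirming" a field means checking it is present in what the terminal reports (the disclosure leaves the actual check open):

```python
EXPECTED_FIELDS = (
    "did",             # device identifier (DID)
    "location",        # geographical location information
    "basic_info",      # basic device information
    "alarm_category",  # alarm category
    "realtime_req",    # real-time requirement information
    "interval",        # acquisition interval information
)

def confirm_terminal_config(scene, message):
    """Handle terminal access information: confirm configuration field by field."""
    cfg = message["config"]
    missing = [f for f in EXPECTED_FIELDS if f not in cfg]
    if missing:
        # Cannot be confirmed: surface locally so staff can fix the configuration.
        render_locally(scene, {"unconfirmed_fields": missing})
        return {"output": None, "confirmed": False}
    scene.devices[cfg["did"]] = cfg    # terminal granted access
    return {"output": None, "confirmed": True}
```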
In one possible implementation, step S11 may include:
when the input information includes device state information of a terminal, judging whether the terminal is in an abnormal state according to the device state information;
generating second alarm information under the condition that the terminal is in an abnormal state, wherein the processing result comprises the second alarm information;
outputting the second alarm information within the first scene through an output device of the first scene;
accordingly, step S12 may include: outputting the second alarm information through the output interface of the first scene according to the scene organization information of the first scene, where the scene organization information indicates the association relationship between the first scene and other scenes.
For example, for a connected terminal, the first scene may obtain the terminal's device state information through the input interface and judge from it whether the terminal is in an abnormal state, for example whether the device temperature is within the normal range or whether a camera is capturing images normally. If the terminal is in an abnormal state, corresponding alarm information (referred to as second alarm information) may be generated.
In a possible implementation manner, the first scene may complete output at this layer through an output device corresponding to the first scene (e.g., a display device or a database) according to a predefined output form, for example, outputting the alarm information to the display interface of a display device or writing it to a database for storage.
In a possible implementation manner, it may be determined from the scene organization information of the first scene whether the second alarm information needs to be output to an external device or another scene. If so, the first scene may, in step S12, transmit the second alarm information through the output interface to the external device or other scenes according to the scene organization information and the data transmission requirements of the output interface.
For example, in an intelligent elevator scene, if it is detected that the elevator door is abnormally open or the elevator is overloaded, corresponding alarm information can be generated. On the one hand, the alarm is output through output devices of the intelligent elevator scene, such as a loudspeaker or display screen in the elevator, by sounding an alert, broadcasting an alarm voice message, displaying the alarm content, and so on; on the other hand, according to the scene organization information of the intelligent elevator scene, the alarm information is sent through the output interface to an upper-layer scene, such as a building management scene.
In this way, management of device state information of the terminal can be achieved through the scene.
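Continuing the sketch, device state handling might look as follows; the abnormality thresholds here are invented purely for illustration:

```python
def check_device_state(scene, message):
    """Handle device state information: judge abnormality, raise second alarm."""
    state = message["state"]
    abnormal = (not 0 <= state.get("temperature_c", 25) <= 70
                or state.get("camera_ok") is False)
    if not abnormal:
        return {"output": None}
    alarm = {"category": "alarm", "sid": scene.sid,      # second alarm information
             "did": message.get("did"), "state": state}
    render_locally(scene, alarm)                          # output at this layer
    # Forward through the output interface only if the scene organization
    # information names an upper-layer scene to receive it.
    return {"output": alarm if scene.upstream else None}
```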
In one possible implementation, step S11 may include:
under the condition that the input information comprises external data information, processing the data information according to the data information and scene configuration information of the first scene to obtain second event information corresponding to the data information, wherein the processing result comprises the second event information;
outputting, by an output device of the first scene, the second event information in the first scene;
accordingly, step S12 may include: outputting the second event information through the output interface of the first scene according to the scene organization information of the first scene, where the scene organization information indicates the association relationship between the first scene and the second scene.
For example, the input information includes external data information, such as data collected by terminals (e.g., images captured by a camera, speech captured by a microphone), data input by the Internet of Things platform, manually entered data, and data input by other scenes. When the first scene receives external data information through its input interface, all data can be stored automatically in time order according to the settings, and the data information may be processed according to the scene configuration information of the first scene to obtain the corresponding second event information.
For example, in an access control scene, the scene function included in the scene configuration information is face recognition, and whether to open the gate is determined according to the recognition result. When the first scene receives a face image captured by a camera through the input interface, it can perform face recognition on the image and automatically match personnel information; if the person is matched in the face library, the corresponding second event information is determined to be "normal person, entry permitted".
In this case, on the one hand, output can be completed at this layer through the output device corresponding to the access control scene, for example, controlling the gate to open, storing the person's entry record in the database, and displaying the entry on the display interface of the access control device; on the other hand, in step S12, the second event information may be output through the output interface of the access control scene according to its scene organization information, for example, sent to an associated upper-layer scene (e.g., an attendance management scene or a building management scene).
In an example, if the person is not matched in the face library, the corresponding second event information can be determined to be "abnormal person, entry prohibited". In this case, on the one hand, output can be completed at this layer through the output device corresponding to the access control scene, for example, storing a record of the abnormal access attempt in the database and keeping the gate closed; on the other hand, in step S12, the second event information may be output through the output interface of the access control scene according to its scene organization information, for example, sent to an associated upper-layer scene (e.g., an attendance management scene or a building management scene).
In one possible implementation, the input interface of an upper-layer scene may accept less than the output interface of a lower-layer scene provides; that is, the upper-layer scene may discard some data. When the first scene receives data input from a lower-layer scene, it may discard part of it according to its scene configuration information. For example, if the input video stream has a high frame rate and large data volume, and the processing in the first scene does not need that much data to complete its function, only part of the video stream data may be selected for processing and the rest discarded.
In this way, management of external data information can be achieved through the scene.
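A sketch of the external-data path, using the access control example; `match_face_library` is a stand-in for the real matcher, and the 1-in-5 sampling illustrates the data-discarding note above:

```python
def match_face_library(frame):
    """Hypothetical face-library match; returns person info or None."""
    return {"name": "registered person"} if frame.get("known") else None

def process_external_data(scene, message):
    """Handle external data information: derive second event information."""
    # Upper layers may accept less than lower layers emit: keep 1 frame in 5.
    if message.get("seq", 0) % 5:
        return {"output": None}
    if "face_recognition" not in scene.functions:
        return {"output": None}
    person = match_face_library(message["data"])
    event = {"category": "event", "sid": scene.sid,      # second event information
             "person": person,
             "verdict": "entry permitted" if person else "entry prohibited"}
    render_locally(scene, event)       # open/keep gate, store record, display
    return {"output": event}           # also forwarded to upper-layer scenes
```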
In one possible implementation, step S11 may include:
when the input information includes first event information and/or first alarm information, determining an output mode of the first event information and/or the first alarm information according to scene configuration information and scene organization information of the first scene, where the output mode is scene-internal output or scene-external output;
accordingly, step S12 may include:
and under the condition that the output mode is scene external output, outputting the first event information and/or the first alarm information through an output interface of the first scene.
For example, the input information includes first event information and/or first alarm information, such as event information and/or alarm information from a terminal, or from another scene (a second scene). When the first scene receives first event information and/or first alarm information through the input interface, it can determine their output mode according to the scene configuration information and scene organization information of the first scene: scene-internal output (completing output at the current layer) and/or scene-external output (output to other scenes).
In a possible implementation manner, if the output mode is scene-internal output, the information is output directly to the display interface of a display device, written to a database for storage, and so on; if the output mode is scene-external output, it is output in step S12 to the associated upper-layer scene through the output interface of the first scene.
In a possible implementation manner, the first scene may also process and output the first event information and/or the first warning information according to the scene configuration information, which is not limited in this disclosure.
By the method, the management of the event information and the alarm information can be realized through the scene.
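As an illustration of steps S11/S12 for event and alarm input, the following Python sketch dispatches on the output mode; the field names (`output_mode`, `upper_scenes`) and the stub functions are assumptions for this example, not the disclosed interfaces.

```python
def write_to_database(info: dict) -> None:
    print("stored:", info)                  # stand-in for a database write

def show_on_display(info: dict) -> None:
    print("displayed:", info)               # stand-in for a display interface

def send_to_upper_scene(scene_id: str, info: dict) -> None:
    print(f"sent to {scene_id}:", info)     # stand-in for the output interface

def handle_event_or_alarm(info: dict, scene_config: dict, scene_org: dict) -> None:
    # Step S11: the scene configuration decides the output mode.
    if scene_config.get("output_mode") == "scene_external":
        # Step S12: the scene organization lists the associated upper-layer scenes.
        for scene_id in scene_org.get("upper_scenes", []):
            send_to_upper_scene(scene_id, info)
    else:
        # Scene internal output: complete the output at the current layer.
        write_to_database(info)
        show_on_display(info)
```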
In a possible implementation, the first scene further includes a command input interface and a command output interface, used for receiving and sending commands respectively.
In a possible implementation manner, command issuing proceeds in the direction opposite to alarm uploading: a command is sent to a target scene, and after querying the sub-scenes and devices under it, the target scene issues the command to the relevant sub-scenes and/or devices. Commands that a device does not care about are ignored and are not actually sent to that device. The target scene queries which commands each sub-scene cares about and sends the command to those sub-scenes; each sub-scene in turn queries its own devices and grandchild scenes, checking level by level, until the command to be issued is finally sent to the Internet of Things platform, which delivers it to the actual processing device, as sketched below.
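A minimal Python sketch of this top-down conduction, under assumed data shapes (dicts with `devices`, `sub_scenes`, `accepted_commands` and `concerned_commands` keys), might look as follows:

```python
class IoTPlatform:
    """Stand-in for the Internet of Things platform's command channel."""
    def send(self, device_id: str, command: dict) -> None:
        print(f"deliver {command['type']} to device {device_id}")

def issue_command(scene: dict, command: dict, platform: IoTPlatform) -> None:
    # Devices under this scene: commands they do not care about are ignored.
    for device in scene.get("devices", []):
        if command["type"] in device.get("accepted_commands", []):
            platform.send(device["device_id"], command)
    # Recurse into sub-scenes that declare interest in this command type.
    for sub in scene.get("sub_scenes", []):
        if command["type"] in sub.get("concerned_commands", []):
            issue_command(sub, command, platform)
```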
In one possible implementation manner, the scene interaction method according to the embodiment of the present disclosure further includes:
under the condition that a command input interface of the first scene receives command information, determining whether the command information is command information aiming at the first scene according to scene configuration information of the first scene;
and if the command information is command information for the first scene, sending the command information to a terminal associated with the first scene so as to enable the terminal to execute the command information.
For example, the first scene may further be provided with a command input interface and a command output interface for receiving and sending command information. The command input interface can be a part of the input interface or an independent interface; likewise, the command output interface can be a part of the output interface or an independent interface, which is not limited by the present disclosure.
In one possible implementation, when command information is sent to the first scene due to a business requirement, it may be sent to the command input interface of the first scene in the service process. After receiving the command information at the command interface address, the command input interface may determine, according to the scene configuration information of the first scene, whether the command information is command information for the first scene, for example, whether the scene identifier carried by the command information matches the scene identifier of the first scene.
In one possible implementation, if the command information is command information for the first scene, the terminal that needs to process the command may be retrieved, and it may be determined whether that terminal has a corresponding interface. If it does, the command information is forwarded to the terminal's interface according to the requirements defined by that interface. The interface may also be a command issuing interface of the Internet of Things platform, or even an output display device configured in the scene, which is not limited in the present disclosure.
For example, in an intelligent elevator scene, the command information received from the command input interface may be: control the elevator to move to a target floor. The intelligent elevator scene then sends the command information to the corresponding elevator, so that the elevator moves to the target floor and opens its door upon arrival. The present disclosure does not limit the specific content of the command information.
In a possible implementation, the first scene may also send the command information to a certain service executor according to the scene configuration information, which is not limited by the present disclosure.
In this way, the reception and execution of commands can be achieved through the scene; a minimal sketch follows.
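The following Python sketch checks whether a command targets the scene and, if so, dispatches it to the associated terminal; the field names `scene_id` and `terminal` and the `Terminal` stub are assumptions for illustration, not the disclosed interface definition.

```python
class Terminal:
    """Stand-in for a terminal's command interface (assumed shape)."""
    def execute(self, command: dict) -> None:
        print("executing:", command)

def on_command_received(command: dict, scene_config: dict) -> bool:
    # Only handle commands whose scene identifier matches this scene's.
    if command.get("scene_id") != scene_config["scene_id"]:
        return False
    terminal = scene_config.get("terminal")      # terminal associated with the scene
    if terminal is None:
        return False                             # no corresponding interface
    terminal.execute(command)                    # e.g. "move elevator to floor 12"
    return True
```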
In one possible implementation manner, the scene interaction method according to the embodiment of the present disclosure further includes:
under the condition that a command input interface of the first scene receives command information, determining whether the command information meets a forwarding condition according to scene organization information of the first scene;
and outputting the command information through a command output interface of the first scene under the condition that the command information meets a forwarding condition.
For example, after receiving command information at the command interface address, the command input interface of the first scene may determine whether the command information satisfies a forwarding condition according to the scene organization information of the first scene. The forwarding condition includes, for example, that the command information is a command for an upper-layer scene of the first scene, or that the command information specifies execution by a plurality of scenes.
In one possible implementation, if the command information satisfies the forwarding condition, it may be transmitted to the command input interfaces of other scenes through the command output interface of the first scene, based on the scene organization information of the first scene. Generally, commands are conducted from upper-layer scenes to lower-layer scenes, although sending a command from a lower-layer scene to an upper-layer scene for execution is not excluded. The present disclosure does not limit the specific manner of conducting commands.
In this way, forwarding and conducting of commands can be achieved through the scenario.
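For illustration, the forwarding check could look like the sketch below; the `target_scenes` field and the `send` callable (standing in for the command output interface) are assumed names, not part of the disclosure.

```python
def maybe_forward(command: dict, scene_org: dict, send) -> bool:
    """Push `command` out through the command output interface when a
    forwarding condition holds; return False to execute it locally."""
    local_id = scene_org["scene_id"]
    targets = command.get("target_scenes", [])
    # Forwarding conditions from the text: the command targets another
    # scene, or it names several scenes for execution.
    if any(t != local_id for t in targets) or len(targets) > 1:
        for scene_id in targets:
            if scene_id != local_id:
                send(scene_id, command)
        return True
    return False

# Usage: maybe_forward({"target_scenes": ["upper"]}, {"scene_id": "local"},
#                      lambda sid, cmd: print("forward to", sid))
```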
The following description will be given taking an intelligent elevator scene in an intelligent city as an example.
The intelligent devices of the elevator can be added to an elevator hall security scene and an intelligent elevator scene. A background administrator can create the elevator hall security scene and the intelligent elevator scene respectively, configure the alarm events, command interface addresses and parameters, and distribute alarms and commands to the corresponding scenes.
In an example, upper-level scenes of the intelligent elevator scene can be set, such as the elevator hall security scene and a residential building intelligent elevator scene, which may receive elevator call alarms, boarding events, elevator abnormal-state alarms, and the like.
In an example, when the elevator is first put into use, both the elevator hall security scene and the intelligent elevator scene can receive the elevator's access information, and each confirms, through a handshake, configuration information such as the elevator's device identifier, geographical location information, basic device information, alarm categories, real-time requirement information and acquisition interval information, so that the elevator is associated with the elevator hall security scene and the intelligent elevator scene respectively; a sketch of this confirmation follows.
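A minimal sketch of such a confirmation step, with assumed field names for the configuration items, might be:

```python
REQUIRED_ITEMS = ("device_id", "location", "basic_info",
                  "alarm_categories", "realtime_required", "poll_interval")

def confirm_access(access_info: dict) -> dict:
    """Handshake step: check the configuration items the scene needs."""
    missing = [item for item in REQUIRED_ITEMS if item not in access_info]
    return {
        "device_id": access_info.get("device_id"),
        "confirmed": not missing,       # device is associated with the scene
        "missing_items": missing,       # scene may request these again
    }
```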
In an example, during operation the elevator reports device state information, such as the state of the elevator door, the running state of the elevator, acceleration data, and so on. The intelligent elevator scene obtains this device state information through its input interface; if an abnormal state is detected, for example the elevator door is abnormally open or the elevator is overloaded, the scene generates alarm information and sends it to external devices or other scenes through the output interface. The upper-level scenes of the intelligent elevator scene, such as the elevator hall security scene and the residential building intelligent elevator scene, can receive this alarm information.
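For illustration only, the following sketch checks two assumed abnormal conditions (door open while running, overload) and pushes an alarm through a `send_alarm` callable standing in for the output interface:

```python
def on_device_state(state: dict, send_alarm) -> None:
    """Generate alarm information when the reported state is abnormal."""
    abnormal = ((state.get("door") == "open" and state.get("running", False))
                or state.get("load_kg", 0) > state.get("max_load_kg", 1000))
    if abnormal:
        send_alarm({
            "device_id": state.get("device_id"),
            "category": "elevator_abnormal",
            "detail": state,        # raw state for upper-layer scenes to inspect
        })

# Usage: on_device_state({"device_id": "lift-1", "door": "open",
#                         "running": True}, print)
```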
In an example, when a person inside the elevator faces the camera, a face scan event may be generated. The elevator can send the captured image, face image or face detection result to the intelligent elevator scene; the scene obtains the face scan event data through its input interface and performs the corresponding processing, such as face recognition and automatic matching of person information. If a resident in the face library is matched, the person can be determined to be a resident, and the intelligent elevator scene can send a command to control the elevator to automatically carry the resident to the resident's floor or to the first floor.
In an example, if no resident in the face library is matched, or the person matches an abnormal-person library (e.g., a prior-offense library), alarm information may be generated and sent. The intelligent elevator scene can send the alarm information to the elevator hall security scene through the output interface for handling; the elevator hall security scene may notify security personnel to go to the elevator, trigger alarm processing, and so on.
In an example, the intelligent elevator scene can also send the alarm information to a database for storage, and send it to the elevator so that the elevator performs corresponding operations, such as voice-broadcasting the alarm information; a sketch of this face-scan flow follows.
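The face-scan flow could be sketched as below; the matcher callables and the command/alarm senders are stand-ins for the face libraries and scene interfaces, and every name here is an assumption for this example.

```python
def on_face_scan(face, match_resident, match_abnormal,
                 send_command, send_alarm) -> None:
    """Match a face against the libraries and react as the text describes."""
    resident = match_resident(face)              # None when no resident matches
    if resident is not None:
        # Dispatch the elevator to the resident's floor (or the first floor).
        send_command({"type": "goto_floor", "floor": resident.get("floor", 1)})
        return
    category = ("abnormal_person" if match_abnormal(face) is not None
                else "unknown_person")
    send_alarm({"category": category})           # e.g. to the elevator hall security scene
```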
In this way, digital management of the various application scenes related to the elevator can be realized through the intelligent elevator scene and its upper-level scenes, improving the efficiency of digital management.
Fig. 2 shows a schematic diagram of a scene interaction process according to an embodiment of the present disclosure. As shown in fig. 2, a scene 1, a scene 2, and a scene 3 are deployed in the digital twin service process, where the scene 3 is an upper-layer scene of the scene 1 and the scene 2, and each scene is provided with an input interface, an output interface, a command output interface, and a command input interface (some interfaces are not shown). The scene 1 and the scene 2 are connected to the internet of things platform through an input interface and an output interface respectively, and are connected with terminals (intelligent devices, sensors and the like) in a real scene through the internet of things platform.
As shown in fig. 2, in an example, scene 1 may generate an event/alarm; for example, when it is determined from the acquired device state information that the terminal is in an abnormal state, an event/alarm may be generated, sent through the output interface to a data lake (which may also be referred to as a database/data warehouse) for storage, and sent through the output interface to the input interface of scene 3.
In an example, scene 2 may receive an event/alarm sent by the Internet of Things platform through its input interface. After analysis and processing, the received event/alarm is, on one hand, sent to the data lake through the output interface for storage and, on the other hand, sent to the input interface of scene 3 through the output interface.
In an example, scene 3 may receive events/alarms from scene 1, scene 2 or the data lake through its input interface and, after analysis and processing, send them through its output interface to the application layer/presentation layer or to other scenes (not shown): for example, event/alarm information may be sent to relevant personnel through the application layer, displayed on a corresponding display interface through the presentation layer, or transmitted to other scenes.
In an example, scene 3 receives a command from the application layer/presentation layer through its command input interface, and executes and/or forwards the command according to the scene configuration information and scene organization information; when the command needs to be forwarded, it is forwarded to the corresponding scene through the command output interface. In fig. 2, the command is transmitted to the command input interface of scene 1.
In an example, after receiving the command through its command input interface, scene 1 executes and/or forwards the command according to the scene configuration information and scene organization information; when the command needs to be executed, the command output interface can send it to the Internet of Things platform so that the platform executes it, or send it to the terminal corresponding to scene 1 for execution.
In this way, efficient digital management can be achieved, and the data lake stores only data meaningful to the actual business, reducing dirty data. A toy wiring of this flow is sketched below.
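The following toy Python wiring mirrors the upward event/alarm conduction of fig. 2 under assumed class and field names; it is a sketch of the data flow, not the disclosed implementation.

```python
class Scene:
    """Toy scene: persists events to the data lake and conducts them upward."""
    def __init__(self, name: str, data_lake: list, upper: "Scene | None" = None):
        self.name, self.data_lake, self.upper = name, data_lake, upper

    def on_event(self, event: dict) -> None:
        self.data_lake.append((self.name, event))   # output interface -> data lake
        if self.upper is not None:
            self.upper.on_event(event)              # output interface -> upper scene

data_lake: list = []
scene3 = Scene("scene3", data_lake)                 # upper-layer scene
scene1 = Scene("scene1", data_lake, upper=scene3)   # lower-layer scene (scene 2 is analogous)
scene1.on_event({"alarm": "device_abnormal"})
print(data_lake)   # [('scene1', {...}), ('scene3', {...})]
```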
According to the scene interaction method of the embodiments of the present disclosure, digital management of various application scenes can be realized through a service process according to the scene definition; scene indexing and referencing can be realized through uniformly defined scene identifiers, and after being referenced, a scene can access each device according to the interface description and report data according to the scene organization information.
The scene interaction method can be applied to applications such as smart cities, where digital management and operation and maintenance are carried out through a digital twin service process; data can be transmitted between scenes and devices, and between scenes, through predefined interfaces and organization relations, improving management and operation-and-maintenance efficiency. In addition, the background can dynamically start or stop scenes, map each scene or its key information onto a virtual map according to geographic information, or map scenes after a 3D effect map is made, improving the convenience and intuitiveness of management and operation and maintenance.
It is understood that the above method embodiments of the present disclosure can be combined with each other to form combined embodiments without departing from the principles and logic; due to space limitations, details are not repeated in the present disclosure. Those skilled in the art will appreciate that, in the above methods of the specific embodiments, the specific order of execution of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure also provides a scene interaction apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any scene interaction method provided by the present disclosure; for the corresponding technical solutions and descriptions, refer to the methods section, and details are not repeated here.
Fig. 3 shows a block diagram of a scene interaction device according to an embodiment of the present disclosure. The device is applied to a server, at least one service process runs in the server, at least one scene is deployed in each service process, and each scene comprises an input interface and an output interface. As shown in fig. 3, the apparatus includes:
the information processing module 31 is configured to, when an input interface of a first scene receives input information, perform corresponding processing on the input information according to a category of the input information to obtain a processing result; the first scene is any scene deployed in the service process, and the input information includes at least one of access information of a terminal, device state information of the terminal, external data information, first event information and first alarm information;
and an information output module 32, configured to output the output information through an output interface of the first scene when the processing result includes the output information.
In one possible implementation manner, the information processing module is configured to: confirming the configuration information of the terminal under the condition that the input information comprises access information of the terminal to obtain a configuration confirmation result, wherein the configuration information comprises at least one of equipment identification, geographical position information, equipment basic information, alarm category, real-time requirement information and acquisition interval information of the terminal, and the processing result comprises the configuration confirmation result.
In one possible implementation manner, the information processing module is configured to: under the condition that the input information comprises equipment state information of a terminal, judging whether the terminal is in an abnormal state or not according to the equipment state information; generating second alarm information under the condition that the terminal is in an abnormal state, wherein the processing result comprises the second alarm information; outputting the second warning information in the first scene through the output device of the first scene;
wherein the information output module is configured to: outputting the second warning information through an output interface of the first scene according to the scene organization information of the first scene, wherein the scene organization information indicates the association relation between the first scene and other scenes.
In one possible implementation manner, the information processing module is configured to: under the condition that the input information comprises external data information, processing the data information according to the data information and scene configuration information of the first scene to obtain second event information corresponding to the data information, wherein the processing result comprises the second event information; outputting, by an output device of the first scene, the second event information in the first scene; wherein the information output module is configured to: outputting the second event information through an output interface of the first scene according to scene organization information of the first scene, wherein the scene organization information indicates the association relation between the first scene and the second scene.
In a possible implementation manner, the information processing module is configured to: determining an output mode of the first event information and/or the first warning information according to scene configuration information and scene organization information of the first scene under the condition that the input information comprises the first event information and/or the first warning information, wherein the output mode comprises scene internal output or scene external output; wherein the information output module is configured to: outputting, under the condition that the output mode is scene external output, the first event information and/or the first alarm information through an output interface of the first scene.
In one possible implementation, the first scenario further includes a command input interface and a command output interface, and the apparatus further includes: the command information determining module is used for determining whether the command information is command information aiming at the first scene according to scene configuration information of the first scene under the condition that a command input interface of the first scene receives the command information; and the command executing module is used for sending the command information to a terminal associated with the first scene to enable the terminal to execute the command information when the command information is the command information aiming at the first scene.
In one possible implementation, the apparatus further includes: the condition determining module is used for determining whether the command information meets forwarding conditions according to scene organization information of the first scene under the condition that a command input interface of the first scene receives the command information; and the command output module is used for outputting the command information through a command output interface of the first scene under the condition that the command information meets the forwarding condition.
In one possible implementation, the apparatus further includes: the scene identification distribution module is used for responding to scene creation operation and distributing scene identifications to a first scene to be created; a scene configuration module, configured to determine scene configuration information of the first scene in response to a configuration operation on the first scene, where the scene configuration information includes device organization information and scene basic information of the first scene; a scene organization module, configured to determine scene organization information of the first scene in response to an association operation between the first scene and a second scene, where the scene organization information indicates an association relationship between the first scene and the second scene; the scene deployment module is used for deploying the first scene into a preset service process; the device organization information comprises at least one of a device type of a terminal corresponding to the first scene, a command receivable by a device, a device function and an interface parameter of the device; the scene basic information includes at least one of an input interface, an output interface, a scene identifier, a scene type, a scene name, and scene function information of the first scene.
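The creation flow these modules describe could be sketched, under assumptions, as follows; the registry shape and the `uuid`-based identifier allocation are illustrative choices, not the disclosed mechanism.

```python
import uuid

def create_scene(config: dict, upper_scene_ids: list, registry: dict) -> str:
    """Allocate a scene identifier, record config/organization, deploy."""
    scene_id = str(uuid.uuid4())                 # allocated scene identifier
    registry[scene_id] = {
        "config": config,                        # device organization + basic scene info
        "organization": {"upper_scenes": upper_scene_ids},
        "deployed": True,                        # deployed into a preset service process
    }
    return scene_id

# Usage: registry = {}; sid = create_scene({"type": "elevator"}, [], registry)
```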
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a volatile or non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The disclosed embodiments also provide a computer program product comprising computer readable code or a non-transitory computer readable storage medium carrying computer readable code, which when run in a processor of an electronic device, the processor in the electronic device performs the above method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 4 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like terminal.
Referring to fig. 4, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800, the relative positioning of components, such as a display and keypad of the electronic device 800, the sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (WiFi), a second generation mobile communication technology (2G) or a third generation mobile communication technology (3G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 5 illustrates a block diagram of an electronic device 1900 in accordance with an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 5, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized by utilizing state information of the computer-readable program instructions, and the electronic circuitry can execute the computer-readable program instructions to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, it is embodied as a software product, such as a Software Development Kit (SDK).
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (11)

1. A scene interaction method is applied to a server, at least one service process runs in the server, at least one scene is deployed in each service process, each scene comprises an input interface and an output interface, and the method comprises the following steps:
under the condition that an input interface of a first scene receives input information, executing corresponding processing on the input information according to the category of the input information to obtain a processing result;
the first scene is any scene deployed in the service process, and the input information includes at least one of access information of a terminal, device state information of the terminal, external data information, first event information and first alarm information;
and outputting the output information through an output interface of the first scene under the condition that the processing result comprises the output information.
2. The method according to claim 1, wherein the performing corresponding processing on the input information according to the category of the input information to obtain a processing result comprises:
confirming the configuration information of the terminal to obtain a configuration confirmation result under the condition that the input information comprises the access information of the terminal,
the configuration information includes at least one of a device identifier of the terminal, geographical location information, device basic information, alarm category, real-time requirement information, and acquisition interval information, and the processing result includes the configuration confirmation result.
3. The method according to claim 1, wherein the performing corresponding processing on the input information according to the category of the input information to obtain a processing result comprises:
under the condition that the input information comprises equipment state information of a terminal, judging whether the terminal is in an abnormal state or not according to the equipment state information;
generating second alarm information under the condition that the terminal is in an abnormal state, wherein the processing result comprises the second alarm information;
outputting the second warning information in the first scene through the output device of the first scene;
wherein, under the condition that the processing result includes output information, outputting the output information through the output interface of the first scene includes:
and outputting the second warning information through an output interface of the first scene according to the scene organization information of the first scene, wherein the scene organization information indicates the association relation between the first scene and other scenes.
4. The method according to claim 1, wherein the performing corresponding processing on the input information according to the category of the input information to obtain a processing result comprises:
under the condition that the input information comprises external data information, processing the data information according to the data information and scene configuration information of the first scene to obtain second event information corresponding to the data information, wherein the processing result comprises the second event information;
outputting, by an output device of the first scene, the second event information in the first scene;
wherein, under the condition that the processing result includes output information, outputting the output information through the output interface of the first scene includes:
and outputting the second event information through an output interface of the first scene according to scene organization information of the first scene, wherein the scene organization information indicates the association relation between the first scene and the second scene.
5. The method according to claim 1, wherein the performing corresponding processing on the input information according to the category of the input information to obtain a processing result comprises:
determining an output mode of the first event information and/or the first warning information according to scene configuration information and scene organization information of the first scene under the condition that the input information comprises the first event information and/or the first warning information, wherein the output mode comprises scene internal output or scene external output;
wherein, under the condition that the processing result includes output information, outputting the output information through the output interface of the first scene includes:
and under the condition that the output mode is scene external output, outputting the first event information and/or the first alarm information through an output interface of the first scene.
6. The method of claim 1, wherein the first scenario further comprises a command input interface and a command output interface, the method further comprising:
under the condition that a command input interface of the first scene receives command information, determining whether the command information is command information aiming at the first scene according to scene configuration information of the first scene;
and if the command information is command information for the first scene, sending the command information to a terminal associated with the first scene so as to enable the terminal to execute the command information.
7. The method of claim 6, further comprising:
under the condition that a command input interface of the first scene receives command information, determining whether the command information meets a forwarding condition according to scene organization information of the first scene;
and outputting the command information through a command output interface of the first scene under the condition that the command information meets a forwarding condition.
8. The method of claim 1, further comprising:
responding to scene creating operation, and distributing a scene identification for a first scene to be created;
in response to a configuration operation on the first scene, determining scene configuration information of the first scene, wherein the scene configuration information comprises device organization information and scene basic information of the first scene;
in response to an association operation between the first scene and a second scene, determining scene organization information of the first scene, wherein the scene organization information indicates an association relationship between the first scene and the second scene;
deploying the first scene into a preset service process;
the device organization information comprises at least one of a device type of a terminal corresponding to the first scene, a command receivable by a device, a device function and an interface parameter of the device; the scene basic information includes at least one of an input interface, an output interface, a scene identifier, a scene type, a scene name, and scene function information of the first scene.
9. A scene interaction device is applied to a server, at least one service process runs in the server, at least one scene is deployed in each service process, each scene comprises an input interface and an output interface, and the device comprises:
the information processing module is used for executing corresponding processing on the input information according to the category of the input information under the condition that the input information is received by the input interface of the first scene to obtain a processing result;
the first scene is any scene deployed in the service process, and the input information includes at least one of access information of a terminal, device state information of the terminal, external data information, first event information and first alarm information;
and the information output module is used for outputting the output information through the output interface of the first scene under the condition that the processing result comprises the output information.
10. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any one of claims 1 to 8.
11. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 8.
CN202111020597.2A 2021-09-01 2021-09-01 Scene interaction method and device, electronic equipment and storage medium Pending CN113741910A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111020597.2A CN113741910A (en) 2021-09-01 2021-09-01 Scene interaction method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN113741910A true CN113741910A (en) 2021-12-03

Family

ID=78734654

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111020597.2A Pending CN113741910A (en) 2021-09-01 2021-09-01 Scene interaction method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113741910A (en)


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105278986A (en) * 2015-09-30 2016-01-27 小米科技有限责任公司 Control method and apparatus of electronic device
CN108628720A (en) * 2018-05-02 2018-10-09 济南浪潮高新科技投资发展有限公司 Equipment monitoring system and method under a kind of cascade scene
CN111611702A (en) * 2020-05-15 2020-09-01 深圳星地孪生科技有限公司 Digital twin scene creation method, apparatus, device and storage medium
CN112053085A (en) * 2020-09-16 2020-12-08 四川大学 Airport scene operation management system and method based on digital twin
CN112131782A (en) * 2020-08-27 2020-12-25 浙江大学 Multi-loop intelligent factory edge side digital twin scene coupling device
EP3798747A1 (en) * 2019-09-26 2021-03-31 Siemens Aktiengesellschaft Controlling a machine based on an online digital twin
CN112818446A (en) * 2021-01-26 2021-05-18 西安交通大学 Construction method of intelligent workshop digital twin system
CN112904811A (en) * 2021-01-14 2021-06-04 厦门汇利伟业科技有限公司 Multi-device cooperative operation system and method based on digital twin technology
CN113064351A (en) * 2021-03-26 2021-07-02 京东数字科技控股股份有限公司 Digital twin model construction method and device, storage medium and electronic equipment
CN113098892A (en) * 2021-04-19 2021-07-09 恒安嘉新(北京)科技股份公司 Data leakage prevention system and method based on industrial Internet
CN113093578A (en) * 2021-04-09 2021-07-09 上海商汤智能科技有限公司 Control method and device, electronic equipment and storage medium


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LI, SL ET AL.: "Preliminary Study on the Application of Digital Twin in Military Engineering and Equipment", 2020 Chinese Automation Congress (CAC 2020), 31 December 2020 (2020-12-31) *
熊俊臻; 孙俊峰; 黄滔; 王红梅; 李昌隆; 陈翔: "Design of a Cigarette Factory Equipment Monitoring Platform Based on Digital Twin" (基于数字孪生的卷烟厂设备监测平台设计), Computer & Network (计算机与网络), no. 18, 26 September 2020 (2020-09-26) *
赵德宁; 李舒涛; 吴劲松; 廖霄; 邓振华: "BIM-based Digital Twin Smart Machine Room Management System" (基于BIM的数字孪生智慧机房管理系统), Electronic Technology & Software Engineering (电子技术与软件工程), no. 10, 15 May 2020 (2020-05-15) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115619138A (en) * 2022-10-10 2023-01-17 卓思韦尔(北京)信息技术有限公司 Building management and control operation method, device, equipment and medium based on digital twin
CN115619138B (en) * 2022-10-10 2023-04-28 卓思韦尔(北京)信息技术有限公司 Method, device, equipment and medium for building management and control operation based on digital twin


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination