CN113434844A - Intelligent scene building method and device, storage medium and electronic equipment - Google Patents
- Publication number
- CN113434844A (application CN202110701574.1A)
- Authority
- CN
- China
- Prior art keywords
- scene
- information
- target
- characteristic information
- identification
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/27—Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Business, Economics & Management (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- Computer Security & Cryptography (AREA)
- Tourism & Hospitality (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Computer Hardware Design (AREA)
- Health & Medical Sciences (AREA)
- Economics (AREA)
- General Health & Medical Sciences (AREA)
- Human Resources & Organizations (AREA)
- Marketing (AREA)
- Primary Health Care (AREA)
- Strategic Management (AREA)
- General Business, Economics & Management (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention discloses an intelligent scene building method and device, a storage medium, and an electronic device. The method comprises the following steps: acquiring identification information corresponding to target feature information, wherein the target feature information is feature information entered in a biometric device in advance and used to describe a biological characteristic of a target object; in response to a modification operation on the identification information, modifying the identification information into a user tag; and in response to a scene creation request for the biometric device, building an intelligent scene corresponding to the target feature information based on the user tag. The invention solves the prior-art technical problem that intelligent scenes are generally built from the raw biometric information collected by biometric devices, which users cannot easily understand.
Description
Technical Field
The invention relates to the field of smart homes, and in particular to an intelligent scene building method and device, a storage medium, and an electronic device.
Background
At present, in the smart home field, intelligent scenes are generally created from the biometric information collected by biometric devices; that is, scenes rely directly on the identifiers the device assigns, such as fingerprint 1 and fingerprint 2. An intelligent scene built on such identifiers might be an arrival scene that runs when fingerprint 1 unlocks the door, or turning on the television when fingerprint 2 unlocks it.
However, when viewing such a scene, the user must first recall which family member fingerprint 1 and fingerprint 2 each correspond to before the scene makes sense. This makes the scene hard to understand and makes misconfiguration likely, resulting in a poor user experience.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the invention provide an intelligent scene building method and device, a storage medium, and an electronic device, so as to at least solve the prior-art technical problem that intelligent scenes built directly from biometric information collected by biometric devices are difficult for users to understand.
According to one aspect of the embodiments of the invention, an intelligent scene building method is provided, comprising the following steps: acquiring identification information corresponding to target feature information, wherein the target feature information is feature information entered in a biometric device in advance and used to describe a biological characteristic of a target object; in response to a modification operation on the identification information, modifying the identification information into a user tag, wherein the user tag is used to describe an identity characteristic of the target object; and in response to a scene creation request for the biometric device, building an intelligent scene corresponding to the target feature information based on the user tag.
Optionally, after building the intelligent scene corresponding to the target feature information based on the user tag, the method further includes: in response to a scene query request, acquiring a target scene corresponding to the scene query request from the intelligent scene; and displaying the scene detail information of the target scene, wherein the scene detail information describes the target scene using the user tag.
Optionally, acquiring the identification information corresponding to the target feature information includes: entering the target feature information into the biometric device in response to an information entry operation on the biometric device; establishing a binding relationship between the target feature information and the biometric device; and determining the identification information corresponding to the target feature information based on the establishment time and/or establishment order of the binding relationship.
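The identification assignment described above can be sketched in a few lines. This is a hypothetical illustration, not the patent's actual implementation: the class name, method names, and the "fingerprint N" naming convention are all assumptions drawn from the background examples.

```python
from dataclasses import dataclass, field

@dataclass
class BiometricDevice:
    """Hypothetical biometric device that derives default identification
    information ("fingerprint 1", "fingerprint 2", ...) from the
    establishment order of each feature binding."""
    device_id: str
    bindings: dict = field(default_factory=dict)  # identification -> feature data

    def bind_feature(self, feature_data: bytes, kind: str = "fingerprint") -> str:
        # The identification follows the order in which bindings were established.
        order = sum(1 for k in self.bindings if k.startswith(kind)) + 1
        identification = f"{kind} {order}"
        self.bindings[identification] = feature_data
        return identification

lock = BiometricDevice("door-lock-01")
first = lock.bind_feature(b"\x01\x02")   # assigned "fingerprint 1"
second = lock.bind_feature(b"\x03\x04")  # assigned "fingerprint 2"
```

These auto-generated identifications are exactly the opaque labels the invention later replaces with user tags.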
Optionally, in response to the scene creation request for the biometric device, building the intelligent scene corresponding to the target feature information based on the user tag includes: creating and enabling an initial scene of the biometric device in response to the scene creation request; and modifying the scene detail information of the initial scene based on the user tag to build the intelligent scene corresponding to the target feature information.
Optionally, the method further includes: detecting the current recognition state information of the biometric device and reporting it to a security server; using the security server to acquire standard model data corresponding to the biometric device from a network and converting the current recognition state information into standard recognition state information based on the standard model data; and reporting the user tag and the standard recognition state information to a scene service platform, wherein the scene service platform acquires the standard model data from the network and filters and/or queries the user tag based on the standard model data and the standard recognition state information.
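The standard-model conversion and reporting step can be sketched as a field mapping followed by tag attachment. All field names and the mapping itself are illustrative assumptions; the patent does not specify the standard model's schema.

```python
# Hypothetical standard model: maps a device's raw recognition-state fields
# onto standard field names before the state is forwarded.
STANDARD_MODEL = {
    "fp_match": "recognition_result",
    "fp_id": "feature_identification",
}

def to_standard_state(raw_state: dict) -> dict:
    # Keep only fields the standard model knows, renamed to standard names.
    return {STANDARD_MODEL[k]: v for k, v in raw_state.items() if k in STANDARD_MODEL}

def report(raw_state: dict, user_tags: dict) -> dict:
    # Convert, then attach the user tag before reporting to the scene platform.
    std = to_standard_state(raw_state)
    std["user_tag"] = user_tags.get(std.get("feature_identification"), "unknown")
    return std

msg = report({"fp_match": "success", "fp_id": "fingerprint 1"},
             {"fingerprint 1": "Dad"})
```

The scene service platform can then filter or query on `user_tag` rather than on the raw device identifier.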
Optionally, displaying the scene detail information of the target scene includes: acquiring the function parameters associated with the user tag in the target scene; determining the scene detail information corresponding to the function parameters; and displaying the scene detail information of the target scene.
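Rendering scene details through the user tag might look like the following sketch. The scene dictionary's structure (`function_params`, `trigger`, `action`) is an assumption for illustration only.

```python
def scene_details(scene: dict, tags: dict) -> list:
    """Render each function parameter of a scene, substituting the user tag
    for the raw feature identification, e.g. 'fingerprint 2' -> 'Grandma'."""
    lines = []
    for param in scene["function_params"]:
        # Fall back to the raw identification if no tag was configured.
        tag = tags.get(param["trigger"], param["trigger"])
        lines.append(f"When {tag} unlocks: {param['action']}")
    return lines

details = scene_details(
    {"function_params": [{"trigger": "fingerprint 2", "action": "turn on the TV"}]},
    {"fingerprint 2": "Grandma"},
)
```

The displayed line reads "When Grandma unlocks: turn on the TV" instead of referring to an opaque fingerprint number.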
Optionally, the target feature information includes at least one of: fingerprint feature information, face feature information, and pupil feature information; the user tag includes at least one of: a name, a nickname, a job title, a status, and a role.
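The two enumerations above suggest a simple data model. The types below are a hypothetical sketch; the patent defines neither class names nor which tag fields are mandatory.

```python
from dataclasses import dataclass
from enum import Enum

class FeatureKind(Enum):
    # The three feature types named in the text.
    FINGERPRINT = "fingerprint"
    FACE = "face"
    PUPIL = "pupil"

@dataclass(frozen=True)
class UserTag:
    # Any of the tag forms named in the text; treated here as optional fields.
    name: str = ""
    nickname: str = ""
    job_title: str = ""
    status: str = ""
    role: str = ""

tag = UserTag(nickname="Dad", role="father")
```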
According to another aspect of the embodiments of the invention, an intelligent scene building device is also provided, including: an acquisition module configured to acquire identification information corresponding to target feature information, wherein the target feature information is feature information entered in a biometric device in advance and used to describe a biological characteristic of a target object; a first response module configured to modify, in response to a modification operation on the identification information, the identification information into a user tag, wherein the user tag is used to describe an identity characteristic of the target object; and a second response module configured to respond to a scene creation request for the biometric device and build an intelligent scene corresponding to the target feature information based on the user tag.
According to another aspect of the embodiments of the invention, a non-volatile storage medium is also provided. The storage medium stores a plurality of instructions adapted to be loaded by a processor to execute any one of the above intelligent scene building methods.
According to another aspect of the embodiments of the invention, a processor is also provided. The processor is configured to run a program that, when running, executes any one of the above intelligent scene building methods.
According to another aspect of the embodiments of the invention, an electronic device is also provided, including a memory and a processor. The memory stores a computer program, and the processor is configured to run the computer program to execute any one of the above intelligent scene building methods.
In the embodiments of the invention, an intelligent scene building approach is adopted: identification information corresponding to target feature information is acquired, wherein the target feature information is feature information entered in a biometric device in advance and used to describe a biological characteristic of a target object; in response to a modification operation on the identification information, the identification information is modified into a user tag used to describe an identity characteristic of the target object; and in response to a scene creation request for the biometric device, an intelligent scene corresponding to the target feature information is built based on the user tag. Building the scene from a user tag bound to the biometric information makes the resulting scene easy for the user to understand, thereby solving the prior-art technical problem that scenes built directly from collected biometric information are hard to understand.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flowchart of steps for intelligent scene construction according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an alternative overall scheme of intelligent scene construction according to an embodiment of the present invention;
FIG. 3 is a flowchart of the steps of an alternative intelligent scene construction method according to an embodiment of the present invention;
FIG. 4 is a flowchart of steps of an alternative intelligent scene construction method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an alternative biometric device creation scenario in accordance with an embodiment of the present invention;
FIG. 6 is a flowchart illustrating an alternative process for displaying details of a target scene according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of an alternative intelligent scene viewing flow, according to an embodiment of the invention;
FIG. 8 is a schematic structural diagram of an alternative intelligent scene building apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an embodiment of the present invention, an embodiment of an intelligent scene building method is provided. It should be noted that the steps illustrated in the flowcharts of the drawings may be executed in a computer system, such as one executing a set of computer-executable instructions, and that, although a logical order is shown in each flowchart, in some cases the steps may be executed in an order different from the one shown or described here.
The technical solution of this method embodiment can be executed on a mobile terminal, a computer terminal, or a similar computing device. Taking a mobile terminal as an example, the mobile terminal may be a terminal device such as a smartphone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), or a PAD. The mobile terminal may include one or more processors (which may include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processing (DSP) chip, a microcontroller unit (MCU), a field-programmable gate array (FPGA), a neural network processor (NPU), a tensor processor (TPU), an artificial intelligence (AI) processor, and the like) and a memory for storing data. Optionally, the mobile terminal may further include a transmission device for communication, an input/output device, and a display device. It will be understood by those skilled in the art that the foregoing description is only illustrative and does not limit the structure of the mobile terminal; for example, the mobile terminal may include more or fewer components than described above, or have a different configuration.
The memory may be configured to store a computer program, for example, a software program and a module of application software, such as a computer program corresponding to the intelligent scene construction method in the embodiment of the present disclosure, and the processor executes various functional applications and data processing by running the computer program stored in the memory, that is, implements the intelligent scene construction method described above. The memory may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory located remotely from the processor, and these remote memories may be connected to the mobile terminal through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device is used to receive or send data via a network. Specific examples of the network may include a wireless network provided by the communication provider of the mobile terminal. In one example, the transmission device includes a network interface controller (NIC) that can connect to other network devices through a base station to communicate with the Internet. In another example, the transmission device may be a radio frequency (RF) module used to communicate with the Internet wirelessly. The technical solution of this method embodiment can be applied to various communication systems, such as a Global System for Mobile Communications (GSM) system, a Code Division Multiple Access (CDMA) system, a Wideband Code Division Multiple Access (WCDMA) system, a General Packet Radio Service (GPRS) system, a Long Term Evolution (LTE) system, a Frequency Division Duplex (FDD) system, a Time Division Duplex (TDD) system, a Universal Mobile Telecommunications System (UMTS), a Worldwide Interoperability for Microwave Access (WiMAX) system, or a 5G system. Optionally, device-to-device (D2D) communication may be performed between multiple mobile terminals. The 5G system or 5G network is also referred to as a New Radio (NR) system or NR network.
The display device may be, for example, a touch-screen liquid crystal display (LCD) or a touch display (also referred to as a "touch screen" or "touch display screen"). The liquid crystal display enables the user to interact with the user interface of the mobile terminal. In some embodiments, the mobile terminal provides a graphical user interface (GUI) with which the user interacts through finger contacts and/or gestures on a touch-sensitive surface. The human-machine interaction functions optionally include interactions such as creating web pages, drawing, word processing, producing electronic documents, games, video conferencing, instant messaging, e-mail, telephony, playing digital video, playing digital music, and/or web browsing; the executable instructions for these functions are configured or stored in one or more processor-executable computer program products or readable non-volatile storage media.
Fig. 1 is a flowchart of an intelligent scene construction method according to an embodiment of the present invention, and as shown in fig. 1, the method includes the following steps:
Step S102: acquiring identification information corresponding to target feature information, wherein the target feature information is feature information entered in a biometric device in advance and used to describe a biological characteristic of a target object;
Step S104: in response to a modification operation on the identification information, modifying the identification information into a user tag, wherein the user tag is used to describe an identity characteristic of the target object;
Step S106: in response to a scene creation request for the biometric device, building an intelligent scene corresponding to the target feature information based on the user tag.
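Steps S102 to S106 can be sketched as a small state machine. This is a minimal hypothetical illustration; class and method names are not from the patent.

```python
class SceneBuilder:
    """Minimal sketch of steps S102-S106: fetch the identification info,
    replace it with a user tag on a modification operation, then build a
    scene described by that tag."""
    def __init__(self):
        self.labels = {}  # identification -> user tag

    def get_identification(self, feature_id: str) -> str:
        # Step S102: without a tag, the raw identification is all we have.
        return self.labels.get(feature_id, feature_id)

    def modify(self, identification: str, user_tag: str) -> None:
        # Step S104: the modification operation installs the readable tag.
        self.labels[identification] = user_tag

    def create_scene(self, feature_id: str, action: str) -> dict:
        # Step S106: the built scene is described via the user tag.
        tag = self.get_identification(feature_id)
        return {"trigger": tag, "action": action}

b = SceneBuilder()
b.modify("fingerprint 1", "Dad")
scene = b.create_scene("fingerprint 1", "start the home-coming scene")
```

Note that a feature whose identification was never modified simply keeps its raw label, matching the prior-art behavior the invention improves on.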
In the embodiments of the invention, an intelligent scene building approach is adopted: identification information corresponding to target feature information is acquired, wherein the target feature information is feature information entered in a biometric device in advance and used to describe a biological characteristic of a target object; in response to a modification operation on the identification information, the identification information is modified into a user tag used to describe an identity characteristic of the target object; and in response to a scene creation request for the biometric device, an intelligent scene corresponding to the target feature information is built based on the user tag. Building the scene from a user tag bound to the biometric information makes the resulting scene easy for the user to understand, thereby solving the prior-art technical problem that scenes built directly from collected biometric information are hard to understand.
In this embodiment, the user may modify the identification information through the client on the mobile terminal, and the cloud server, in response to the modification operation, changes the identification information to the user tag entered by the user, thereby defining an easily understood tag nickname for the target feature information.
Optionally, the user may enter the target feature information into the biometric device in advance. In an optional embodiment, the target feature information includes at least one of: fingerprint feature information, face feature information, and pupil feature information; the user tag includes at least one of: a name, a nickname, a job title, a status, and a role.
Optionally, the user may bind the target feature information to the biometric device through the client and store the feature configuration data in the user tag system. The user then creates and enables a template scene through the client; the scene center acquires the user tag from the user tag system and builds the intelligent scene corresponding to the target feature information based on the user tag.
Optionally, the cloud server may respond to scene creation requests for several biometric devices at the same time and build a plurality of intelligent scenes. For example, the same target feature information may drive intelligent scene 1 and intelligent scene 2, corresponding to biometric device 1 and biometric device 2 respectively; when the target feature information condition is satisfied, biometric device 1 and biometric device 2 are triggered simultaneously.
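The one-feature-to-many-scenes fan-out can be sketched with a simple registry. Names and structures are illustrative assumptions.

```python
from collections import defaultdict

class SceneCenter:
    """Sketch: one feature identification may drive several scenes on
    different devices; a successful recognition triggers all of them."""
    def __init__(self):
        self.scenes = defaultdict(list)  # feature id -> [(device, action)]

    def add_scene(self, feature_id: str, device: str, action: str) -> None:
        self.scenes[feature_id].append((device, action))

    def on_recognized(self, feature_id: str) -> list:
        # Fire every scene registered for this feature, in insertion order.
        return [f"{device}: {action}" for device, action in self.scenes[feature_id]]

center = SceneCenter()
center.add_scene("fingerprint 1", "device 1", "unlock door")
center.add_scene("fingerprint 1", "device 2", "turn on lights")
fired = center.on_recognized("fingerprint 1")
```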
This method embodiment achieves the following technical effects. A user can define an easily understood tag nickname for the target feature information in the cloud through the client. When creating a scene, the user sees the tag nickname they edited; when viewing scene details, the user sees details described by that same nickname. Because the tag is configured before scene creation, the scene center can fetch it from the cloud at creation time, so the client presents the scene configuration in human-readable terms. This improves the user experience, reduces configuration errors caused by unintelligible identification information, and, since the scene center also fetches the tag when scene details are viewed, greatly improves the understandability of the displayed details.
As an optional embodiment, FIG. 2 is an overall design diagram of an intelligent scene building method according to an embodiment of the present invention. As shown in FIG. 2, the overall design covers four aspects: user tag setting, scene creation, model definition, and state-triggered scenes. User tag setting means that the mobile terminal binds the target feature information to the biometric device through the client and stores the feature configuration data in the user tag system. Scene creation means that the mobile terminal creates and enables a template scene, and the scene center acquires the user tag from the user tag system. Model definition means that a network service platform (such as a sea pole network) converts data according to the standard model, synchronizes the conversion result to the security server, and transmits the acquired standard data of the biometric device to the scene center. A state-triggered scene means that the biometric device reports its state information to the security server, the security server transmits the standard-model conversion result to the scene service platform (that is, the message platform), and the message platform then reports the device state to the scene center.
As an optional embodiment, as shown in FIG. 2, the specific implementation steps of the intelligent scene building method in this embodiment of the application may include:
Step S201: the user binds the target feature information to the biometric device through the client; in this embodiment, the target feature information is fingerprint information and the biometric device is a device lock;
Step S202: the user stores the fingerprint configuration data in the user tag system through the client;
Step S203: the network service platform (such as a sea pole network) performs data conversion according to the standard model and synchronizes the conversion result to the security server;
Step S204: the device lock reports its state to the security server;
Step S205: the security server synchronizes the data information of the network service platform and the lock state information to the message platform according to the standard model;
Step S206: the message platform reports the lock state to the scene center;
Step S207: the user creates and enables a template scene through the client and synchronizes the scene information to the scene center;
Step S208: the network service platform synchronizes the lock standard model data to the scene center;
Step S209: the network service platform transmits the acquired user tag data to the scene center.
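The reporting chain of steps S204 to S206 can be traced as a pipeline of hand-offs. This is a toy sketch under assumed message shapes; none of the function names or payload fields come from the patent.

```python
# Hypothetical trace of steps S204-S206: the lock reports raw state to the
# security server, which converts it per the standard model and forwards it
# via the message platform to the scene center.
def lock_report(state: str) -> dict:
    return {"raw": state}                        # S204: raw device report

def security_convert(msg: dict) -> dict:
    # S205: standard-model conversion of the raw lock state.
    return {"standard_state": "unlocked" if msg["raw"] == "open" else "locked"}

def message_forward(msg: dict) -> dict:
    # S206: the message platform addresses the report to the scene center.
    return {"to": "scene-center", **msg}

trace = message_forward(security_convert(lock_report("open")))
```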
In an optional embodiment, after building an intelligent scene corresponding to the target feature information based on the user tag, the method further includes:
step S302, responding to a scene query request, and acquiring a target scene corresponding to the scene query request from the intelligent scene;
step S304, displaying the scene detail information of the target scene, wherein the scene detail information of the target scene is obtained by describing the target scene by using the user tag.
In this embodiment, when the mobile terminal responds to the scene query request (for example, when an unlocking request on the smart lock device is triggered), the client acquires and displays the target scene corresponding to the query and shows the associated user tag; the scene detail information of the target scene describes the scene using that user tag.
Optionally, the same intelligent scene supports building multiple target scenes. When the mobile terminal responds to the query request, the client acquires and displays all target scenes corresponding to the scene query request, together with the user tags of each, so that the user can quickly select a suitable target scene for the situation at hand.
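A query returning every matching target scene might be filtered as below. The scene record structure is an assumption made for the sketch.

```python
def query_scenes(scenes: list, request: dict) -> list:
    """Return every scene whose trigger tag matches the query, so the user
    can choose among several target scenes."""
    return [s for s in scenes if s["trigger"] == request["trigger"]]

all_scenes = [
    {"trigger": "Dad", "action": "open curtains"},
    {"trigger": "Dad", "action": "play news"},
    {"trigger": "Grandma", "action": "turn on the TV"},
]
matches = query_scenes(all_scenes, {"trigger": "Dad"})
```

Because the trigger field holds the user tag rather than a fingerprint number, the query results are immediately readable.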
As an optional embodiment, fig. 3 is a flowchart of an optional intelligent scene construction method according to an embodiment of the present invention. As shown in fig. 3, acquiring the identification information corresponding to the target feature information includes the following steps:
step S402, entering the target feature information into the biometric device in response to an information entry operation on the biometric device;
step S404, establishing a binding relationship between the target feature information and the biometric device;
step S406, determining the identification information corresponding to the target feature information based on the establishment time and/or establishment order of the binding relationship.
In this embodiment, the mobile terminal triggers an information entry operation on the biometric device and enters the target feature information into the device, where the target feature information may be, but is not limited to, fingerprint feature information, face feature information, or pupil feature information.
Optionally, if the target feature information has already been entered into the biometric device, the user binds the target feature information to the biometric device through the client, establishing the binding relationship between the two. In this embodiment, the configuration data of the target feature information is stored in the user tag system.
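Step S406 above, deriving identification information from the establishment time and/or order of the bindings, can be sketched as follows; the naming scheme ("fingerprint 1", "fingerprint 2", ...) is an illustrative assumption, not specified by the patent:

```python
# Sketch of step S406: derive default identification information from the
# order in which bindings were established. Naming scheme is assumed.

def assign_identifications(bindings):
    """bindings: list of (feature_type, established_at) pairs.

    Returns (identification, established_at) pairs, numbered per feature
    type in order of establishment time."""
    counters = {}
    labeled = []
    for feature_type, established_at in sorted(bindings, key=lambda b: b[1]):
        counters[feature_type] = counters.get(feature_type, 0) + 1
        labeled.append((f"{feature_type} {counters[feature_type]}", established_at))
    return labeled

# Two fingerprints and one face, bound at times 2, 3 and 1 respectively.
bindings = [("fingerprint", 2), ("fingerprint", 3), ("face", 1)]
labeled = assign_identifications(bindings)
# -> [('face 1', 1), ('fingerprint 1', 2), ('fingerprint 2', 3)]
```

Sorting by establishment time before numbering is what makes the identifiers reflect "establishment time and/or order", as the step requires.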
In an optional embodiment, in response to a scene creation request to the biometric device, building an intelligent scene corresponding to the target feature information based on the user tag includes:
step S502, creating and enabling an initial scene of the biometric device in response to the scene creation request;
step S504, modifying the scene detail information of the initial scene based on the user tag to build the intelligent scene corresponding to the target feature information.
In this embodiment, the mobile terminal triggers the scene creation request after the target feature information has been entered into the biometric device, the binding relationship between them has been established, and the identification information has been determined from the establishment time and/or order of that binding; the scene created and enabled at this point is the initial scene of the biometric device.
Optionally, based on the target feature information configuration data stored in the user tag system, the scene detail information of the initial scene is rewritten with the user tag to build the intelligent scene corresponding to the target feature information, where the user tag includes at least one of: name, nickname, job title, status, and role.
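Step S504, rewriting the initial scene's details with the user tag, can be sketched as follows; the field names and the string-rewrite rule are illustrative assumptions:

```python
# Sketch of step S504: replace the default identification information in the
# initial scene's details with a user tag. Field names are assumptions.

def apply_user_tag(scene_details, identification, user_tag):
    """Replace the default identification (e.g. 'fingerprint 1') with the
    user tag (e.g. 'Dad') in every string field of the scene details."""
    return {key: value.replace(identification, user_tag)
            if isinstance(value, str) else value
            for key, value in scene_details.items()}

initial = {"name": "fingerprint 1 unlocks the door", "trigger": "fingerprint 1"}
updated = apply_user_tag(initial, "fingerprint 1", "Dad")
# updated["name"] == "Dad unlocks the door"
```

After this rewrite, the scene detail information describes the target scene with the user tag, which is what later makes the scene query results readable.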
As an optional embodiment, fig. 4 is a flowchart of an optional intelligent scene construction method according to an embodiment of the present invention. As shown in fig. 4, the intelligent scene construction method further includes the following steps:
step S602, detecting the current identification state information of the biometric device and reporting it to a security server;
step S604, acquiring, via the security server, the standard model data corresponding to the biometric device from the network, and converting the current identification state information into standard identification state information based on the standard model data;
step S606, reporting the user tag and the standard identification state information to a scene service platform.
Optionally, the network service platform (for example, the Haiji network) performs data conversion according to the standard model, synchronizes the conversion result to the security server, and transmits the acquired standard data of the biometric device to the scene center. The scene service platform is configured to acquire the standard model data from the network and to filter and/or query the user tag based on the standard model data and the standard identification state information.
In this embodiment, fig. 5 is a schematic diagram of an optional scene created by a biometric device according to an embodiment of the present invention. As shown in fig. 5, the intelligent scene created in this embodiment can further implement the following functions:
Tag filtering function: when the user tag does not match the standard identification state information, the standard identification state information cannot trigger the intelligent scene, and the operation of the biometric device is not executed;
Query function: querying the user tag information to obtain the function parameters related to the user tag in the target scene, where these function parameters may include, but are not limited to, family information, user information, and network card (MAC) address function information.
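The filtering and query functions described above can be sketched like this; all data shapes and field names are assumptions for illustration:

```python
# Illustrative sketch of the tag-filtering and tag-query functions.
# Data shapes and field names are assumptions, not from the patent.

def filter_by_tag(scene, standard_state):
    """The scene responds only when the reported standard identification
    state matches the scene's user tag; otherwise the device operation
    is not executed."""
    return standard_state.get("user_tag") == scene["user_tag"]

def query_tag_parameters(tag_system, user_tag):
    """Query the function parameters (family, user, MAC) bound to a tag."""
    return tag_system.get(user_tag, {})

scene = {"user_tag": "Dad", "action": "turn on the hallway light"}
matched = filter_by_tag(scene, {"user_tag": "Dad"})      # True
unmatched = filter_by_tag(scene, {"user_tag": "Guest"})  # False

tag_system = {"Dad": {"family": "home-1", "user": "u-100",
                      "mac": "aa:bb:cc:dd:ee:ff"}}
params = query_tag_parameters(tag_system, "Dad")
```

The filter is the gatekeeper (an unmatched tag simply never triggers the scene), while the query enriches a matched scene with the tag's associated parameters.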
Optionally, the biometric device reports its state information to the security server; the security server converts it according to the standard model and transmits the conversion result to the scene service platform (i.e., the message platform), and the message platform then reports the biometric device's state to the scene center.
As an optional embodiment, fig. 6 is a flowchart of optional steps for presenting the scene detail information of the target scene according to an embodiment of the present invention. As shown in fig. 6, presenting the scene detail information of the target scene further includes the following steps:
step S702, acquiring functional parameters related to the user tag in the target scene;
step S704, determining scene detail information corresponding to the function parameters;
and step S706, displaying the scene detail information of the target scene.
In an optional embodiment, fig. 7 is a schematic diagram of an optional intelligent scene viewing process according to an embodiment of the present invention. As shown in fig. 7, in this embodiment the scene center can also filter by the user tag function and query the user tag set according to the family and MAC functions. Optionally, the user tag system can return the user tag set to the scene center, reset the value range of the user tag function, and return the new value range to the scene center.
Optionally, the function parameters related to the user tag may include, but are not limited to, family information, user information, and MAC function information. In this embodiment, the scene detail information of the target scene is displayed on the client.
According to an embodiment of the present invention, an apparatus embodiment for implementing the intelligent scene construction method is further provided, fig. 8 is a schematic structural diagram of an intelligent scene construction apparatus according to an embodiment of the present invention, and as shown in fig. 8, the intelligent scene construction apparatus includes an obtaining module 80, a first response module 82, and a second response module 84, where:
an obtaining module 80, configured to obtain identification information corresponding to target feature information, where the target feature information is feature information that is previously entered in a biometric device, and the target feature information is used to describe a biometric feature of a target object; a first response module 82, configured to modify, in response to a modification operation on the identification information, the identification information into a user tag, where the user tag is used to describe an identity feature of the target object; a second response module 84, configured to respond to the scene creation request for the biometric device, and build an intelligent scene corresponding to the target feature information based on the user tag.
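A hypothetical sketch of the three modules in fig. 8 wired together end to end; class names, method names, and the default identification scheme are all illustrative, not from the patent:

```python
# Hypothetical sketch of the apparatus in fig. 8: obtaining module,
# first response module, second response module. Names are illustrative.

class ObtainingModule:
    def get_identification(self, feature_type):
        # Acquire the default identification for an entered biometric feature.
        return f"{feature_type} 1"

class FirstResponseModule:
    def modify(self, identification, user_tag):
        # In response to a modification operation, replace the default
        # identification with a user tag describing the target object.
        return user_tag

class SecondResponseModule:
    def build_scene(self, user_tag):
        # In response to a scene creation request, build an intelligent
        # scene described by the user tag.
        return {"name": f"{user_tag} comes home", "tag": user_tag}

identification = ObtainingModule().get_identification("fingerprint")
tag = FirstResponseModule().modify(identification, "Dad")
scene = SecondResponseModule().build_scene(tag)
# scene == {"name": "Dad comes home", "tag": "Dad"}
```

The point of the split is that each module handles one of the three method steps, which is why the embodiment notes they can be placed in the same processor or spread across several.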
It should be noted that the above modules may be implemented by software or by hardware. In the latter case, the modules may all be located in the same processor, or they may be distributed across different processors in any combination.
It should be noted that the obtaining module 80, the first response module 82 and the second response module 84 correspond to steps S102 to S106 in embodiment 1; the modules and their corresponding steps share the same implementation examples and application scenarios, but are not limited to the disclosure of embodiment 1. Note that the above modules may run in a computer terminal as part of the apparatus.
It should be noted that, reference may be made to the relevant description in embodiment 1 for alternative or preferred embodiments of this embodiment, and details are not described here again.
The intelligent scene construction apparatus may further include a processor and a memory; the above modules are stored in the memory as program units, and the processor executes the program units stored in the memory to implement the corresponding functions.
The processor includes a kernel, which calls the corresponding program unit from the memory; one or more kernels may be provided. The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
According to an embodiment of the present invention, an embodiment of a nonvolatile storage medium is also provided. Optionally, in this embodiment, the nonvolatile storage medium includes a stored program, and when the program runs, the device in which the nonvolatile storage medium is located is controlled to execute any one of the above intelligent scene construction methods.
Optionally, in this embodiment, the nonvolatile storage medium may be located in any one of a group of computer terminals in a computer network, or in any one of a group of mobile terminals, and the nonvolatile storage medium includes a stored program.
Optionally, in this embodiment, the storage medium may be configured to store program code for performing the following steps:
S1, acquiring identification information corresponding to target feature information, wherein the target feature information is feature information entered in advance in a biometric device and is used to describe biological features of a target object;
S2, in response to a modification operation on the identification information, modifying the identification information into a user tag, wherein the user tag is used to describe identity features of the target object;
S3, in response to a scene creation request to the biometric device, building an intelligent scene corresponding to the target feature information based on the user tag.
According to an embodiment of the present invention, an embodiment of an electronic device is also provided, including a memory and a processor, where the memory stores a computer program and the processor is configured to run the computer program to execute any one of the above intelligent scene construction methods.
According to an embodiment of the present invention, an embodiment of a computer program product is also provided, which, when executed on a data processing device, is adapted to execute a program initialized with the steps of any one of the above intelligent scene construction methods.
Optionally, when executed on a data processing device, the computer program product is adapted to execute a program initialized with the following method steps:
S1, acquiring identification information corresponding to target feature information, wherein the target feature information is feature information entered in advance in a biometric device and is used to describe biological features of a target object;
S2, in response to a modification operation on the identification information, modifying the identification information into a user tag, wherein the user tag is used to describe identity features of the target object;
S3, in response to a scene creation request to the biometric device, building an intelligent scene corresponding to the target feature information based on the user tag.
Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing program codes, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of multiple computing devices. Optionally, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device; in some cases, the steps shown or described may be performed in an order different from that described herein. They may also be fabricated separately into individual integrated circuit modules, or multiple modules or steps among them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. A method for building an intelligent scene is characterized by comprising the following steps:
acquiring identification information corresponding to target feature information, wherein the target feature information is feature information entered in advance in a biometric device and is used to describe biological features of a target object;
in response to a modification operation on the identification information, modifying the identification information into a user tag, wherein the user tag is used to describe identity features of the target object;
and in response to a scene creation request to the biometric device, building an intelligent scene corresponding to the target feature information based on the user tag.
2. The method of claim 1, wherein after building an intelligent scene corresponding to the target feature information based on the user tag, the method further comprises:
responding to a scene query request, and acquiring a target scene corresponding to the scene query request from the intelligent scene;
and displaying the scene detail information of the target scene, wherein the scene detail information of the target scene is obtained by describing the target scene using the user tag.
3. The method of claim 1, wherein obtaining identification information corresponding to the target feature information comprises:
entering the target feature information into the biometric device in response to an information entry operation on the biometric device;
establishing a binding relationship between the target feature information and the biometric device;
and determining the identification information corresponding to the target feature information based on the establishment time and/or establishment order of the binding relationship.
4. The method according to claim 1, wherein, in response to a scene creation request to the biometric device, building an intelligent scene corresponding to the target feature information based on the user tag comprises:
creating and enabling an initial scene of the biometric device in response to the scene creation request;
and modifying the scene detail information of the initial scene based on the user tag to build the intelligent scene corresponding to the target feature information.
5. The method of claim 1, further comprising:
detecting current identification state information of the biometric device, and reporting the identification state information to a security server;
acquiring, by the security server, standard model data corresponding to the biometric device from a network, and converting the current identification state information into standard identification state information based on the standard model data;
and reporting the user tag and the standard identification state information to a scene service platform, wherein the scene service platform is configured to acquire the standard model data from a network and to filter and/or query the user tag based on the standard model data and the standard identification state information.
6. The method of claim 2, wherein presenting scene detail information of the target scene comprises:
acquiring functional parameters related to the user tags in the target scene;
determining scene detail information corresponding to the functional parameters;
and displaying the scene detail information of the target scene.
7. The method according to any one of claims 1 to 6, wherein the target feature information comprises at least one of: fingerprint characteristic information, face characteristic information and pupil characteristic information; the user tag includes at least one of: name, nickname, job title, status, role.
8. An apparatus for building an intelligent scene, comprising:
an acquisition module, configured to acquire identification information corresponding to target feature information, wherein the target feature information is feature information entered in advance in a biometric device and is used to describe biological features of a target object;
a first response module, configured to modify the identification information into a user tag in response to a modification operation on the identification information, wherein the user tag is used to describe identity features of the target object;
and a second response module, configured to build, in response to a scene creation request to the biometric device, an intelligent scene corresponding to the target feature information based on the user tag.
9. A non-volatile storage medium, characterized in that it stores a plurality of instructions adapted to be loaded by a processor and to execute the method of building an intelligent scenario according to any one of claims 1 to 7.
10. An electronic device comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to run the computer program to perform the method for building an intelligent scene according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110701574.1A CN113434844A (en) | 2021-06-23 | 2021-06-23 | Intelligent scene building method and device, storage medium and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113434844A true CN113434844A (en) | 2021-09-24 |
Family
ID=77753708
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110701574.1A Pending CN113434844A (en) | 2021-06-23 | 2021-06-23 | Intelligent scene building method and device, storage medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113434844A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113434215A (en) * | 2021-06-28 | 2021-09-24 | 青岛海尔科技有限公司 | Information loading method and device, storage medium and processor |
CN114124692A (en) * | 2021-10-29 | 2022-03-01 | 青岛海尔科技有限公司 | Intelligent device skill access method and device, electronic device and storage medium |
CN115327934A (en) * | 2022-07-22 | 2022-11-11 | 青岛海尔科技有限公司 | Intelligent household scene recommendation method and system, storage medium and electronic device |
CN116521159A (en) * | 2023-07-05 | 2023-08-01 | 中国科学院文献情报中心 | Knowledge service platform zero code construction method and system based on scene driving |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106773764A (en) * | 2016-12-30 | 2017-05-31 | 深圳智乐信息科技有限公司 | The method and system that a kind of Intelligent household scene is set and controlled |
CN110045621A (en) * | 2019-04-12 | 2019-07-23 | 深圳康佳电子科技有限公司 | Intelligent scene processing method, system, smart home device and storage medium |
CN111459439A (en) * | 2020-04-08 | 2020-07-28 | 深圳康佳电子科技有限公司 | Information display method, intelligent home server and storage medium |
CN111650840A (en) * | 2019-03-04 | 2020-09-11 | 华为技术有限公司 | Intelligent household scene arranging method and terminal |
CN111766798A (en) * | 2019-04-02 | 2020-10-13 | 青岛海信智慧家居系统股份有限公司 | Intelligent household equipment control method and device |
CN112073471A (en) * | 2020-08-17 | 2020-12-11 | 青岛海尔科技有限公司 | Device control method and apparatus, storage medium, and electronic apparatus |
CN112306968A (en) * | 2020-11-10 | 2021-02-02 | 珠海格力电器股份有限公司 | Scene establishing method and device |
CN112506070A (en) * | 2020-12-16 | 2021-03-16 | 珠海格力电器股份有限公司 | Control method and device of intelligent household equipment |
CN112737899A (en) * | 2020-11-30 | 2021-04-30 | 青岛海尔科技有限公司 | Intelligent device management method and device, storage medium and electronic device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20210924 |