CN117690000A - Internet of things data interaction method and system based on artificial intelligence - Google Patents
Internet of things data interaction method and system based on artificial intelligence
- Publication number
- CN117690000A (application CN202311260501.9A)
- Authority
- CN
- China
- Prior art keywords
- data
- internet
- module
- information
- things
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/94—Hardware or software architectures specially adapted for image or video understanding
- G06V10/95—Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16Y—INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
- G16Y20/00—Information sensed or collected by the things
- G16Y20/40—Information sensed or collected by the things relating to personal data, e.g. biometric data, records or preferences
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/30—Services specially adapted for particular environments, situations or purposes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W76/00—Connection management
- H04W76/10—Connection setup
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Artificial Intelligence (AREA)
- Data Mining & Analysis (AREA)
- Mathematical Physics (AREA)
- General Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Human Computer Interaction (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention discloses an artificial intelligence-based Internet of things data interaction method, which comprises the following steps: opening an information input port and acquiring a data acquisition task input by a user through the port; acquiring user input data, performing positioning analysis on the data, and storing it; collecting and analysing the collected data through a local server; entering the user's action information; shooting a site background picture and uploading it to a cloud server; and fusing actions and backgrounds, where several actions can be selected under one background and each action corresponds to one instruction. The invention ensures that a command action made under the background picture is accurately identified even when the action itself is not especially precise. Compared with traditional action recognition in man-machine interaction, it places a lower accuracy requirement on the action, making actions easier to recognise, commands easier to issue, and equipment easier to control.
Description
Technical Field
The invention relates to the technical field of man-machine interaction, and in particular to an Internet of things data interaction method and system based on artificial intelligence.
Background
The Internet of things originated in the media field and is the third revolution of the information technology industry. The Internet of things connects any object to a network through information sensing equipment according to an agreed protocol, and exchanges information about the object over a transmission medium, realising functions such as intelligent identification, positioning, tracking, and supervision. In short, connecting all articles to the Internet through information sensing equipment so that they can exchange information — informatising the articles to realise intelligent identification and management — is the Internet of things.
Man-machine interaction studies the interaction between a system and its users. The system may be any of various machines, including computerised systems and software. A man-machine interaction interface generally refers to the part visible to the user, through which the user communicates with and operates the system. Establishing man-machine interaction for the Internet of things can improve work efficiency and quality of life.
Man-machine interaction includes speech recognition, action recognition, and so on. Existing artificial-intelligence-based action recognition for Internet of things data interaction is not sufficiently accurate or stable: in use, the action a user makes is easily imprecise, so the recognised features are unclear and the user must repeat the action several times before it is recognised correctly, resulting in low efficiency.
Disclosure of Invention
Based on the technical problems in the background technology, the invention provides an Internet of things data interaction method based on artificial intelligence.
The invention provides an artificial intelligence-based Internet of things data interaction method, which comprises the following steps:
opening an information input port, and acquiring a data acquisition task input by a user based on the information input port;
acquiring user input data, performing positioning analysis according to the user data, and storing;
collecting and analyzing the collected data through a local server;
entering action information of a user;
shooting a site background picture and uploading the site background picture to a cloud server;
fusing actions and backgrounds, wherein a plurality of actions can be selected correspondingly under one background, and each action corresponds to one instruction;
identifying action information and a background picture, comparing the action information and information stored in the cloud server, and when actions made under the background are consistent with actions and the background collected by the local server, making corresponding instructions;
when the action made in the background is inconsistent with the action and the background collected by the cloud server, the instruction does not pass.
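The matching flow above — an action made under a recognised background either matches a stored action-instruction pair or "does not pass" — can be sketched as follows. The `SceneStore` class and the action and instruction names are illustrative assumptions, not part of this disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class SceneStore:
    """Cloud-side store: each background maps to its allowed action -> instruction table."""
    scenes: dict = field(default_factory=dict)  # background_id -> {action: instruction}

    def register(self, background_id, action, instruction):
        self.scenes.setdefault(background_id, {})[action] = instruction

    def match(self, background_id, action):
        """Return the instruction if both background and action are recognised,
        otherwise None (the instruction does not pass)."""
        table = self.scenes.get(background_id)
        if table is None:
            return None
        return table.get(action)

store = SceneStore()
store.register("scene_1", "shake_head", "light_off")
store.register("scene_1", "turn_head", "light_on")

print(store.match("scene_1", "shake_head"))  # -> light_off
print(store.match("scene_1", "wave_arm"))    # -> None (instruction does not pass)
```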
Preferably, background pictures of a plurality of different sites are shot, several pictures are shot for each scene, the pictures of the same scene are transmitted to a cloud server, and the scenes are named scene one, scene two, and so on.
Preferably, independent connections are established for scene one, scene two, and so on, without mutual interference.
1. The scenes are connected to the same cloud server through a WAPI or Bluetooth connection, realising information interaction.
2. By default no automatic connection is established between scenes, so that multiple scenes do not interfere with one another; bridging between scenes can, however, be selected and set through the cloud server. For example, a WAPI or Bluetooth connection between scene one and scene three can be selected autonomously, and mutually connected scenes such as scene one and scene four can establish information sharing: the two background images are shared in real time, and the actions under the two backgrounds are shared as well, on the premise that the actions entered under the two backgrounds do not repeat. Bridging lets the user connect scenes and omit the installation of one camera, but it reduces the one-to-one correspondence and stability: the range the action recognition unit and image background recognition unit must cover is enlarged, so recognition accuracy is lower than with independent connections.
Preferably, the connection between the bracelet and the cloud server is established by wearing the bracelet: a Wireless PAN is arranged in the bracelet, and the connection with the cloud server is established by means of the Wireless PAN.
Wherein the bracelet includes: the system comprises a main control board, an information interaction module connected to the main control board and a power module for supplying power to each module of the man-machine interaction equipment; the power supply module comprises a power supply input circuit for connecting a power supply and a battery module for storing electric energy.
Preferably, the information interaction module includes: the display module is used for displaying the man-machine interaction interface and comprises a touch display screen.
Preferably, the information interaction module includes: and the vibration module is used for outputting physical feedback information.
Preferably, the system comprises the following modules:
a data entry module: the data entry module comprises a request receiving unit, used for opening the information input port and receiving a data acquisition request input by a user through the port;
the information comparison unit is used for acquiring account information of the user and comparing the account information with prestored registration information;
the execution unit is used for receiving a data acquisition task input by a user when the account information is the same as the registration information;
the number comparison unit is used for recording the number of errors when the account information is different from the registration information, and comparing the number of errors with a preset number threshold;
and a data analysis module: the data analysis module comprises artificial-intelligence image analysis and data analysis, and handles image and data classification and recognition;
and a data processing module: the data processing module stores and gathers the collected and analyzed images and actions, receives the collected data fed back by the internet of things equipment in real time, generates evaluation information of the internet of things equipment according to the collected data, and sends the evaluation information to the internet of things equipment;
the data processing module is electrically connected with the state determining unit and is used for acquiring working parameters of all the Internet of things equipment in real time and determining the working state of the Internet of things equipment according to the working parameters; the working state contains the occupancy rate of computing resources;
and an occupancy reading unit, used for marking the acquisition equipment in the model network according to computing-resource occupancy and reading the computing-resource occupancy of each piece of acquisition equipment.
The state determination unit includes:
the standard determining subunit is used for acquiring the calibration parameters of the internet of things equipment and determining the standard capacity of the internet of things equipment according to the calibration parameters;
the ratio determining subunit is used for acquiring physical parameters of the equipment of the Internet of things in real time and determining the load ratio of the equipment of the Internet of things according to the physical parameters;
the correction subunit is used for correcting the standard capacity according to the load ratio to obtain capacity data for all the Internet of things equipment;
the computing subunit is used for sequentially acquiring the task quantity of each Internet of things device, reading the capacity data of the corresponding Internet of things device, and determining the computing resource occupancy rate of the Internet of things device according to the task quantity and the capacity data.
Preferably, the identification unit includes:
action recognition unit: detects and classifies human body actions in the time and space dimensions, mainly identifying what the action is.
Picture background recognition unit: identifies and analyses the shot picture background and fuses it with action recognition; combined with the action recognition unit, it quickly and accurately identifies the action instruction.
A camera module: the picture background recognition unit and the action recognition unit are connected with the camera module in a wireless mode, and pictures and actions are shot through the camera module.
Preferably, the equipment comprises devices that can be networked, such as mechanical equipment, household lamps, electrical appliances, and the like.
Firstly, WAPI bridging between the mechanical equipment, household lamps, electrical appliances and the like and the local server is established, and a target position acquisition request for a target WAPI module is sent to the local server through any WAPI module;
when the requesting WAPI module receives the target position returned by the local server, a bridging transmission path is planned according to the target position;
and sending the instruction to the target WAPI module according to the bridging transmission path.
The beneficial effects of the invention are as follows:
1. First, user data is entered, the user's action information is entered, and a site background picture is shot and uploaded to the cloud server. The action picture and background picture are identified through a camera, ensuring that command actions made under the background picture are identified more accurately; the action can be recognised without being especially precise. Compared with traditional action recognition in man-machine interaction, more feature parts can be captured, the accuracy requirement on actions is reduced, recognition is easier, and instructions can be issued to control the operation of equipment.
2. The invention provides a bracelet comprising a main control board, a power module, a battery module, a power input circuit, a display module, and an information interaction module, where the information interaction module comprises a vibration module. The bracelet is started and a connection with the cloud server is established through the Wireless PAN; the user wears the bracelet and makes action instructions under a specific background, strengthening the connection between person and machine and improving the recognition and accuracy of action instructions.
Drawings
FIG. 1 is a flow chart of an artificial intelligence based data interaction method of the Internet of things;
FIG. 2 is a first sub-flowchart of an Internet of things data interaction method based on artificial intelligence according to the present invention;
FIG. 3 is a second sub-flowchart of an Internet of things data interaction method based on artificial intelligence according to the present invention;
FIG. 4 is a network diagram of an artificial intelligence-based data interaction method of the Internet of things;
fig. 5 is a partial module connection diagram of an artificial intelligence-based data interaction method of the internet of things.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it should be understood that the specific implementations described herein are only for illustrating and explaining the embodiments of the present application, and are not intended to limit the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present application based on the embodiments herein.
It should be noted that, where the embodiments of the present application refer to directional indications (such as up, down, left, right, front, and rear), the directional indications are merely used to explain the relative positional relationship, movement conditions, and the like between the components in a specific posture (as shown in the drawings), and if the specific posture changes, the directional indications change correspondingly.
In addition, if there is a description of "first", "second", etc. in the embodiments of the present application, the description of "first", "second", etc. is for descriptive purposes only and is not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the embodiments may be combined with each other, but it is necessary to base that the technical solutions can be realized by those skilled in the art, and when the technical solutions are contradictory or cannot be realized, the combination of the technical solutions should be regarded as not exist and not within the protection scope of the present application.
Referring to FIGS. 1-5, an artificial intelligence-based data interaction method for the Internet of things comprises:
opening an information input port, and acquiring a data acquisition task input by a user based on the information input port;
acquiring user input data, performing positioning analysis according to the user data, and storing;
collecting and analyzing the collected data through a local server;
the method comprises the steps of inputting action information of a user, wherein the identified action self-definition comprises a dotted head, a shaking head, a turning head, a waving including double arms, and more obvious actions including a left hand and a right hand waving arms, a shrugging shoulder and the like;
shooting a site background picture and uploading it to the cloud server, where the background picture mainly identifies feature pictures; the feature pictures comprise relatively fixed objects and shapes used as feature identifiers, so that as long as several feature pictures in an image can be identified, the correspondence can be made;
fusing actions and backgrounds, where several actions can be selected under one background and each action corresponds to one instruction. For example, under one feature background containing at least two feature parts (the feature parts being the shot pictures uploaded to the cloud server), several action instructions may be selected: shaking the head under that background may turn a lamp off and shaking again may turn it back on, while turning the head may turn an air conditioner on and turning again may turn it off;
identifying the action information and background picture and comparing them with the information stored in the cloud server; when the action made under the background is consistent with the action and background collected by the local server, the corresponding instruction is issued;
when the action made in the background is inconsistent with the action and the background collected by the cloud server, the instruction does not pass.
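The background recognition premise in the steps above — a background is accepted once several of its feature pictures are identified in the frame — can be sketched as follows. The feature names and the threshold of two matches are illustrative assumptions drawn from the passage, not a stated implementation.

```python
# A background counts as recognised when at least REQUIRED_MATCHES of its
# registered feature parts are detected in the current camera frame.
REQUIRED_MATCHES = 2  # "at least more than two characteristic parts" (assumed threshold)

def background_recognised(registered_features, detected_features):
    """Return True when enough registered feature parts appear in the frame."""
    hits = set(registered_features) & set(detected_features)
    return len(hits) >= REQUIRED_MATCHES

scene_features = {"sofa", "window", "lamp"}
print(background_recognised(scene_features, {"sofa", "lamp", "person"}))  # True
print(background_recognised(scene_features, {"person"}))                  # False
```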
Preferably, background pictures of a plurality of different sites are shot, several pictures are shot for each scene, the pictures of the same scene are transmitted to a cloud server, and the scenes are named scene one, scene two, and so on.
In implementation, a dynamic big-data acquisition mode is adopted. The parts of the Internet of things that require computation are built on the cloud to form an in-network cloud platform; parts that do not need large computing resources can be computed on the local server, while parts that do need large computing resources are computed through an out-of-network cloud platform that interacts with them. In the invention, the out-of-network cloud platform gathers the data produced during out-of-network computation; once gathered, this data forms the first database as big data, and the first database grows richer as the number of out-of-network operations increases. The pressure of data processing and analysis is thus shared between the local server and the out-of-network cloud platform, ensuring the accuracy of data computation.
In the invention, independent connections are established between scene one, scene two, and so on, without mutual interference. 1. The scenes are connected to the same cloud server through a WAPI or Bluetooth connection, realising information interaction.
2. By default no automatic connection is established between scenes, so that multiple scenes do not interfere with one another; bridging between scenes can, however, be selected and set through the cloud server. For example, a WAPI or Bluetooth connection between scene one and scene three can be selected autonomously, and mutually connected scenes such as scene one and scene four can establish information sharing: the two background images are shared in real time, and the actions under the two backgrounds are shared as well, on the premise that the actions entered under the two backgrounds do not repeat. Bridging lets the user connect scenes and omit the installation of one camera, but it reduces the one-to-one correspondence and stability: the range the action recognition unit and image background recognition unit must cover is enlarged, so recognition accuracy is lower than with independent connections.
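The scene bridging rule above can be sketched as follows: scenes stay isolated by default, and a user-selected bridge is only allowed when the actions entered in the two scenes do not repeat. The `SceneRegistry` class and all names are illustrative assumptions.

```python
class SceneRegistry:
    def __init__(self):
        self.actions = {}     # scene_id -> set of entered actions
        self.bridges = set()  # frozenset({scene_a, scene_b}); empty by default

    def add_scene(self, scene_id, actions):
        self.actions[scene_id] = set(actions)

    def bridge(self, a, b):
        """Bridge two scenes only if their entered actions do not overlap."""
        if self.actions[a] & self.actions[b]:
            raise ValueError("bridged scenes must not repeat entered actions")
        self.bridges.add(frozenset((a, b)))

reg = SceneRegistry()
reg.add_scene("scene_1", {"shake_head", "nod"})
reg.add_scene("scene_3", {"wave_left", "shrug"})
reg.bridge("scene_1", "scene_3")  # allowed: no overlapping actions
```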
In the invention, the connection between the bracelet and the cloud server is established by wearing the bracelet: a Wireless PAN is arranged in the bracelet, and the connection with the cloud server is established through it. Because a wireless network in the 2.4 GHz band can interfere with Bluetooth devices used at the same time, a 5 GHz wireless network or a higher-band Bluetooth device may be chosen to avoid signal interference.
Further, the bracelet comprises: a main control board, an information interaction module connected to the main control board, and a power module for supplying power to each module of the man-machine interaction equipment; the power module comprises a power input circuit for connecting a power supply and a battery module for storing electrical energy.
Further, the information interaction module includes: and the display module is used for displaying the man-machine interaction interface.
Further, the display module comprises a touch display screen.
Further, the information interaction module includes: and the vibration module is used for outputting physical feedback information.
The invention comprises the following modules:
a data entry module: the data entry module comprises a request receiving unit, used for opening the information input port and receiving a data acquisition request input by a user through the port;
the information comparison unit is used for acquiring account information of the user and comparing the account information with prestored registration information;
the execution unit is used for receiving a data acquisition task input by a user when the account information is the same as the registration information;
the number comparison unit is used for recording the number of errors when the account information is different from the registration information, and comparing the number of errors with a preset number threshold;
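The data entry module's units above can be sketched as follows; the lockout behaviour once the error count reaches the preset threshold, and the threshold value itself, are assumptions for illustration.

```python
MAX_ERRORS = 3  # assumed preset number threshold

class DataEntryModule:
    def __init__(self, registered):
        self.registered = registered  # account -> prestored registration info
        self.errors = {}              # account -> recorded error count

    def verify(self, account, info):
        """Accept the acquisition request when info matches; count errors otherwise."""
        if self.errors.get(account, 0) >= MAX_ERRORS:
            return "locked"
        if self.registered.get(account) == info:
            self.errors[account] = 0
            return "accepted"
        self.errors[account] = self.errors.get(account, 0) + 1
        return "rejected"

module = DataEntryModule({"user1": "secret"})
print(module.verify("user1", "wrong"))   # rejected
print(module.verify("user1", "secret"))  # accepted
```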
and a data analysis module: the data analysis module comprises artificial-intelligence image analysis and data analysis, and handles image and data classification and recognition;
and a data processing module: the data processing module stores and gathers the collected and analyzed images and actions, receives the collected data fed back by the internet of things equipment in real time, generates evaluation information of the internet of things equipment according to the collected data, and sends the evaluation information to the internet of things equipment;
the data processing module is electrically connected with the state determining unit and is used for acquiring working parameters of all the Internet of things equipment in real time and determining the working state of the Internet of things equipment according to the working parameters; the working state contains the occupancy rate of computing resources;
and an occupancy reading unit, used for marking the acquisition equipment in the model network according to computing-resource occupancy and reading the computing-resource occupancy of each piece of acquisition equipment.
The state determination unit includes:
the standard determining subunit is used for acquiring the calibration parameters of the internet of things equipment and determining the standard capacity of the internet of things equipment according to the calibration parameters;
the ratio determining subunit is used for acquiring physical parameters of the equipment of the Internet of things in real time and determining the load ratio of the equipment of the Internet of things according to the physical parameters;
the correction subunit is used for correcting the standard capacity according to the load ratio to obtain capacity data for all the Internet of things equipment;
the computing subunit is used for sequentially acquiring the task quantity of each Internet of things device, reading the capacity data of the corresponding Internet of things device, and determining the computing resource occupancy rate of the Internet of things device according to the task quantity and the capacity data.
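The state determination subunits above can be sketched as follows: a standard capacity from calibration parameters is corrected by the current load ratio, then occupancy is the task amount over the corrected capacity. The linear correction formula is an assumption; the text does not give an exact formula.

```python
def corrected_capacity(standard_capacity, load_ratio):
    """Scale the standard capacity down by the current load ratio (assumed linear)."""
    return standard_capacity * (1.0 - load_ratio)

def occupancy(task_amount, standard_capacity, load_ratio):
    """Computing-resource occupancy of one Internet of things device, clamped to [0, 1]."""
    cap = corrected_capacity(standard_capacity, load_ratio)
    if cap <= 0:
        return 1.0  # no usable capacity left: fully occupied
    return min(task_amount / cap, 1.0)

print(occupancy(task_amount=40, standard_capacity=100, load_ratio=0.2))  # 0.5
```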
In the present invention, the identification unit includes:
action recognition unit: detects and classifies human body actions in the time and space dimensions, mainly identifying what the action is.
Picture background recognition unit: identifies and analyses the shot picture background and fuses it with action recognition; combined with the action recognition unit, it quickly and accurately identifies the action instruction.
A camera module: the picture background recognition unit and the action recognition unit are connected with the camera module in a wireless mode, and pictures and actions are shot through the camera module.
In the invention, the equipment comprises devices that can be networked, such as mechanical equipment, household lamps, electrical appliances, and the like. WAPI bridging between the equipment and the local server is first established, and a target position acquisition request for a target WAPI module is sent to the local server through any WAPI module;
when the requesting WAPI module receives the target position returned by the local server, a bridging transmission path is planned according to the target position;
and sending the instruction to the target WAPI module according to the bridging transmission path.
According to the invention, user data are first entered: the action information of a user is recorded, and a site background picture is shot and uploaded to the cloud server. The action picture and the background picture are identified through the camera, so that an instruction made under that background can be recognized more accurately, and recognition can be completed without a particularly precise action. Compared with traditional action recognition in human-machine interaction, more characteristic parts can be captured, the accuracy requirement on the action is reduced, and recognition is easier; an instruction is then issued to control the operation of the equipment. The bracelet comprises a main control board, a power module, a battery module, a power input circuit, a display module and an information interaction module, the information interaction module comprising a vibration module. The bracelet is started and a connection with the cloud server is established through a Wireless PAN; the user wears the bracelet and makes action instructions under a specific background, which strengthens the connection between person and machine and improves both the recognition of action instructions and its accuracy.
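The relaxed accuracy requirement described above can be modeled as a tolerance threshold on the distance between a recognized action's features and the enrolled template. The feature-vector representation, the Euclidean-distance test, and the tolerance value are all illustrative assumptions.

```python
def matches(recognized, enrolled, tol=0.25):
    # Accept an action when its feature vector lies within a loose tolerance
    # of the enrolled template -- the action need not be particularly precise.
    if len(recognized) != len(enrolled):
        return False
    dist = sum((a - b) ** 2 for a, b in zip(recognized, enrolled)) ** 0.5
    return dist <= tol
```

A slightly imprecise action still matches its template, while a clearly different action is rejected and the instruction does not pass.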
The bracelet comprises a main control board, an information interaction module connected to the main control board, and a power module for supplying power to each module of the human-machine interaction equipment. The power module comprises a power input circuit for connecting a power supply and a battery module for storing electric energy. The information interaction module comprises a display module for displaying the human-machine interaction interface, the display module comprising a touch display screen, and a vibration module for outputting physical feedback information; after an instruction is accurately identified, the vibration module provides vibration feedback, which facilitates touch control through the display screen.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium which can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.
Claims (9)
1. An artificial intelligence-based internet of things data interaction method is characterized by comprising the following steps:
opening an information input port, and acquiring a data acquisition task input by a user based on the information input port;
acquiring user input data, performing positioning analysis according to the user data, and storing the result;
collecting and analyzing the collected data through a local server;
entering action information of a user;
shooting a site background picture and uploading the site background picture to a cloud server;
fusing actions and backgrounds, wherein a plurality of actions can be selected correspondingly under one background, and each action corresponds to one instruction;
identifying the action information and the background picture, comparing them with the information stored in the cloud server, and issuing the corresponding instruction when the action made under the background is consistent with the action and background collected by the local server;
when the action made under the background is inconsistent with the action and background collected by the cloud server, the instruction is rejected;
wearing a bracelet and establishing a connection between the bracelet and the cloud server, wherein the bracelet has a built-in Wireless PAN through which the connection with the cloud server is established, and wherein the bracelet comprises: a main control board, an information interaction module connected to the main control board and a power module for supplying power to each module of the man-machine interaction equipment; the power module comprises a power input circuit for connecting a power supply and a battery module for storing electric energy.
2. The artificial intelligence-based internet of things data interaction method according to claim 1, wherein background pictures of a plurality of different sites are shot, a plurality of pictures are shot for each scene, the pictures of the same scene are transmitted to the cloud server, and the scenes are designated as a first scene and a second scene.
3. The artificial intelligence-based internet of things data interaction method according to claim 2, wherein independent connections are established for the first scene and the second scene without mutual interference.
4. The artificial intelligence-based internet of things data interaction method according to claim 1, wherein the information interaction module comprises: a display module for displaying the man-machine interaction interface, the display module comprising a touch display screen.
5. The artificial intelligence-based internet of things data interaction method according to claim 4, wherein the information interaction module comprises: a vibration module for outputting physical feedback information.
6. An artificial intelligence-based internet of things data interaction system implementing the method of claim 1, comprising:
a data entry module: the data entry module comprises: a request receiving unit for opening the information input port and receiving a data acquisition request input by a user based on the information input port;
the information comparison unit is used for acquiring account information of the user and comparing the account information with prestored registration information;
the execution unit is used for receiving a data acquisition task input by a user when the account information is the same as the registration information;
the number comparison unit is used for recording the number of errors when the account information is different from the registration information, and comparing the number of errors with a preset number threshold;
a data analysis module: the data analysis module comprises artificial intelligence image analysis and data analysis, handling image and data classification and identification;
and a data processing module: the data processing module stores and gathers the collected and analyzed images and actions, receives the collected data fed back by the internet of things equipment in real time, generates evaluation information of the internet of things equipment according to the collected data, and sends the evaluation information to the internet of things equipment;
the data processing module is electrically connected with the state determining unit and is used for acquiring working parameters of all the Internet of things equipment in real time and determining the working state of the Internet of things equipment according to the working parameters; the working state contains the occupancy rate of computing resources;
and the occupancy rate reading unit is used for marking the acquisition equipment in the model network according to the computing resource occupancy rate and reading the computing resource occupancy rate of each acquisition equipment.
7. The artificial intelligence-based internet of things data interaction system according to claim 6, wherein the state determination unit comprises:
the standard determining subunit is used for acquiring the calibration parameters of the internet of things equipment and determining the standard capacity of the internet of things equipment according to the calibration parameters;
the ratio determining subunit is used for acquiring physical parameters of the equipment of the Internet of things in real time and determining the load ratio of the equipment of the Internet of things according to the physical parameters;
the correction subunit is used for correcting the standard capacity according to the load ratio to obtain capacity data of each Internet of things device;
the computing subunit is used for sequentially acquiring the task quantity of each Internet of things device, reading the capacity data of the corresponding Internet of things device, and determining the computing resource occupancy rate of the Internet of things device according to the task quantity and the capacity data.
8. The artificial intelligence-based internet of things data interaction system according to claim 1, wherein the identification unit comprises:
an action recognition unit: detecting and classifying human body actions in the time dimension and the space dimension, primarily identifying what the action is;
a picture background recognition unit: identifying and analyzing the shot picture background and fusing the result with action recognition; in combination with the action recognition unit, an action instruction is identified quickly and accurately;
a camera module: the picture background recognition unit and the action recognition unit are connected with the camera module in a wireless mode, and pictures and actions are shot through the camera module.
9. The artificial intelligence-based internet of things data interaction system according to claim 1, wherein the equipment comprises networkable devices such as mechanical equipment, household lamps and electrical appliances, wherein a bridge between these devices and the WAPI of a local server is first established, and a target position acquisition request for a target WAPI module is sent to the local server through any WAPI module;
when that WAPI module receives the target position returned by the local server, a bridging transmission path is planned according to the target position;
and sending the instruction to the target WAPI module according to the bridging transmission path.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311260501.9A CN117690000B (en) | 2023-09-27 | 2023-09-27 | Internet of things data interaction method and system based on artificial intelligence |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117690000A true CN117690000A (en) | 2024-03-12 |
CN117690000B CN117690000B (en) | 2024-10-22 |
Family
ID=90137838
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311260501.9A Active CN117690000B (en) | 2023-09-27 | 2023-09-27 | Internet of things data interaction method and system based on artificial intelligence |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117690000B (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102915111A (en) * | 2012-04-06 | 2013-02-06 | 寇传阳 | Wrist gesture control system and method |
CN103279191A (en) * | 2013-06-18 | 2013-09-04 | 北京科技大学 | 3D (three dimensional) virtual interaction method and system based on gesture recognition technology |
CN106773766A (en) * | 2016-12-31 | 2017-05-31 | 广东博意建筑设计院有限公司 | Smart home house keeper central control system and its control method with learning functionality |
CN108491069A (en) * | 2018-03-01 | 2018-09-04 | 湖南西冲智能家居有限公司 | A kind of augmented reality AR transparence display interaction systems |
CN109658928A (en) * | 2018-12-06 | 2019-04-19 | 山东大学 | A kind of home-services robot cloud multi-modal dialog method, apparatus and system |
CN112839254A (en) * | 2019-11-04 | 2021-05-25 | 海信视像科技股份有限公司 | Display apparatus and content display method |
CN113820963A (en) * | 2021-10-14 | 2021-12-21 | 深圳守正出奇科技有限公司 | Intelligent home control method and system based on Internet of things |
CN114567768A (en) * | 2022-03-09 | 2022-05-31 | 上海湃睿信息科技有限公司 | Interaction method and system based on VR technology |
CN114756115A (en) * | 2020-12-28 | 2022-07-15 | 阿里巴巴集团控股有限公司 | Interactive control method, device and device |
CN115097946A (en) * | 2022-08-15 | 2022-09-23 | 汉华智能科技(佛山)有限公司 | Remote worship method, system and storage medium based on Internet of things |
CN115512267A (en) * | 2022-09-27 | 2022-12-23 | 阿里巴巴(中国)有限公司 | Video behavior identification method, system, device and equipment |
Also Published As
Publication number | Publication date |
---|---|
CN117690000B (en) | 2024-10-22 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||