CN117492569A - AIoT (Artificial Intelligence of Things) cloud platform visual intelligent identification management method and system - Google Patents


Info

Publication number
CN117492569A
Authority
CN
China
Prior art keywords: internet, information, target object, things, determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311554629.6A
Other languages
Chinese (zh)
Inventor
姜世坤
张能锋
张加斌
陈广宇
杨俊�
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Wanjiaan Interconnected Technology Co ltd
Original Assignee
Shenzhen Wanjiaan Interconnected Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Wanjiaan Interconnected Technology Co ltd filed Critical Shenzhen Wanjiaan Interconnected Technology Co ltd
Priority to CN202311554629.6A
Publication of CN117492569A
Legal status: Pending


Classifications

    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F 3/167 - Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G06V 20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G16Y 20/10 - Information sensed or collected by the things relating to the environment, e.g. temperature; relating to location
    • G16Y 40/30 - IoT characterised by the purpose of the information processing: Control

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Software Systems (AREA)
  • Environmental & Geological Engineering (AREA)
  • Toxicology (AREA)
  • Computational Linguistics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides an AIoT (Artificial Intelligence of Things) cloud platform visual intelligent identification management method and system. The method comprises the following steps: the Internet of Things cloud platform obtains position information of a target object and determines an Internet of Things display object according to the position information; the platform obtains gesture information of the target object, determines the moving direction of the selection object according to the gesture information, displays a 3D virtual image of the controllable Internet of Things devices on the display object, and, taking a specific position of the 3D virtual image as the center, moves a selection cursor through the 3D virtual image in that moving direction; the platform then acquires video information of the target object and, when the video information is recognized as a confirmation command, stops the movement of the selection cursor to determine the first Internet of Things device selected by the target object; finally, it acquires voice information of the target object, recognizes the voice information to obtain control information, converts the control information into a control command corresponding to the first Internet of Things device, and sends the control command to that device.

Description

AIoT (Artificial Intelligence of Things) cloud platform visual intelligent identification management method and system
Technical Field
The invention relates to the fields of computers and the Internet of Things, and in particular to an AIoT Internet of Things cloud platform visual intelligent identification management method and system.
Background
AIoT (the Artificial Intelligence of Things) integrates AI technology with IoT (Internet of Things) technology: massive data from different dimensions are generated and collected through the Internet of Things and stored in the cloud and at the edge, and then, through big data analysis and higher forms of artificial intelligence, the datafication of everything and the intelligent interconnection of everything are achieved.
Existing AIoT systems cannot perform platform-based visual intelligent identification, which degrades the control of the Internet of Things and the user's experience.
Disclosure of Invention
The embodiment of the invention provides an AIoT Internet of Things cloud platform visual intelligent identification management method and system, which can intelligently manage Internet of Things devices through visual intelligent identification and improve the user experience.
In a first aspect, an embodiment of the present invention provides an AIoT Internet of Things cloud platform visual intelligent identification management method, comprising the following steps:
the Internet of Things cloud platform obtains position information of a target object and determines an Internet of Things display object according to the position information;
the Internet of Things platform obtains gesture information of the target object, determines the moving direction of the selection object according to the gesture information, displays a 3D virtual image of the controllable Internet of Things devices on the display object, and, taking a specific position of the 3D virtual image as the center, moves a selection cursor through the 3D virtual image in the moving direction;
the Internet of Things platform acquires video information of the target object and, when the video information is recognized as a confirmation command, stops the movement of the selection cursor to determine the first Internet of Things device selected by the target object; it then acquires voice information of the target object, recognizes the voice information to obtain control information, converts the control information into a control command corresponding to the first Internet of Things device, and sends the control command to the first Internet of Things device.
In a second aspect, an AIoT Internet of Things cloud platform visual intelligent identification management system is provided. The system is applied to an Internet of Things cloud platform and includes:
an acquisition unit, used to acquire the position information of a target object, determine the Internet of Things display object according to the position information, and acquire gesture information of the target object;
a processing unit, used to determine the moving direction of the selection object according to the gesture information, display a 3D virtual image of the controllable Internet of Things devices on the display object, and, taking a specific position of the 3D virtual image as the center, move a selection cursor through the 3D virtual image in the moving direction;
the acquisition unit, further used to acquire video information of the target object;
and a calculation control unit, used to stop the movement of the selection cursor when the video information is recognized as a confirmation command, thereby determining the first Internet of Things device selected by the target object, acquire voice information of the target object, recognize the voice information to obtain control information, convert the control information into a control command corresponding to the first Internet of Things device, and send the control command to the first Internet of Things device.
The embodiment of the invention has the following beneficial effects:
according to the technical scheme, the cloud platform of the Internet of things acquires the position information of a target object, and determines the display object of the Internet of things according to the position information; the method comprises the steps that an internet of things platform obtains gesture information of a target object, a moving direction of a selected object is determined according to the gesture information, a 3D virtual graph of controllable internet of things equipment is displayed on a display object, and a selection cursor is sequentially moved to the moving direction in the 3D virtual graph by taking position information as a center; and the internet of things platform acquires video information of the target object, stops the movement of the selection cursor to determine the selected first internet of things equipment of the target object according to the video information as a confirmation command, acquires and identifies voice information of the target object, identifies the voice information to obtain control information, converts the control information into a control command corresponding to the first internet of things equipment, and sends the control command to the first internet of things equipment. According to the technical scheme, the moving direction of the selection cursor is controlled through gestures, and then the first Internet of things device in the 3D virtual image is determined on the premise that a confirmation command is obtained, and then the control is obtained through a voice recognition mode.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of an internet of things cloud platform;
fig. 2 is a schematic flow chart of an AIOT internet of things cloud platform vision intelligent identification management method provided in a specific embodiment of the present application;
fig. 3 is a schematic structural diagram of an AIOT internet of things cloud platform visual intelligent identification management system provided in a specific embodiment of the present application;
fig. 4 is a schematic diagram of a 3D virtual graph provided herein.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms "first," "second," "third," and "fourth" and the like in the description and in the claims and drawings are used for distinguishing between different objects and not necessarily for describing a particular sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, result, or characteristic described in connection with the embodiment may be included in at least one embodiment of the invention. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
Referring to fig. 1, fig. 1 provides a schematic structural diagram of an Internet of Things cloud platform. As shown in fig. 1, the Internet of Things cloud platform includes a cloud server connected to a router, and the router is connected to a plurality of Internet of Things devices. The Internet of Things devices may be terminals running systems such as iOS and Android, or terminals running other systems, for example HarmonyOS. At least one of the Internet of Things devices may specifically include a processing device, a memory, a display screen, a camera, a microphone, and a communication circuit, which may be connected through a bus or in other manners; the specific manner of connection is not limited here.
With the popularization of the Internet of Things, more and more Internet of Things devices are deployed in places such as homes and offices, and controlling them has become increasingly complicated. A home, for example, may contain air conditioners, refrigerators, cameras, televisions, projectors, game consoles, smart locks, and so on. Without visual management, a specific device may not be locked onto and errors may occur. If devices are selected purely through voice recognition, for instance, and air conditioners are installed in the living room and several bedrooms, voice alone can only determine that an air conditioner is meant, not which air conditioner should be controlled, which is inconvenient for users. A convenient implementation is therefore needed to control the corresponding Internet of Things device based on vision, so that control is convenient for users.
Referring to fig. 2, fig. 2 is a flow chart of an AIoT Internet of Things cloud platform visual intelligent identification management method provided in an embodiment of the present application. The method shown in fig. 2 is executed on the Internet of Things platform shown in fig. 1 and includes the following steps:
step S201, the cloud platform of the Internet of things acquires the position information of a target object, and determines the display object of the Internet of things according to the position information;
For example, the above position information may be obtained in various manners, such as through video. The position information is, specifically, position information within the range of the 3D virtual image, although it may also lie outside that range; when the position is outside the 3D virtual image range, a connected Internet of Things device capable of collecting the target object's position is required, for example inside a vehicle or through a switched-on mobile phone. The position information here does not refer to GPS or BeiDou coordinates alone, although GPS or BeiDou coordinates may be used to assist in determining it.
The following describes a practical scheme for confirming the above-described position information.
Obtaining the position information of the target object and determining the Internet of Things display object according to the position information may specifically include:
acquiring the position coordinates of the target object; when the target object is determined, according to the position coordinates, to be in a non-specific area (i.e., not a home or office area), acquiring the ID of the Internet of Things device that collects the video information of the target object, and determining the Internet of Things device corresponding to that ID as the display object.
As another example, obtaining the position information of the target object and determining the Internet of Things display object according to the position information may specifically include:
acquiring the position coordinates of the target object; when the target object is determined, according to the position coordinates, to be in a specific area (such as a home area or an office area), acquiring the collected video information of the target object, performing object identification on the video information to determine the object name in the video information, determining the position information within the specific area according to the object name, and selecting the Internet of Things device corresponding to that position information in the specific area as the display object.
For example, the manner of identifying the object name in the video information may be an existing identification manner, which is not limited in this application.
Specifically, for example, when the object name identified in the video information is a sofa, the position of the sofa is determined as the position information, and the display object corresponding to the sofa, such as a television or a projection display, is determined according to the mapping relationship between the position information and the display device. Similarly, if the object name identified in the video information is a bed, the position of the bed is determined as the position information, and the display object corresponding to the bed is determined according to the same mapping relationship.
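The object-name-to-position-to-display-device lookup described above can be sketched as two mapping tables. This is a minimal illustration only; all object, position, and device names below are hypothetical, not taken from the patent:

```python
from typing import Optional

# Hypothetical tables standing in for the patent's "mapping relationship
# between the position information and the display device".
POSITION_OF_OBJECT = {
    "sofa": "living_room_sofa",
    "bed": "bedroom_bed",
}
DISPLAY_OF_POSITION = {
    "living_room_sofa": "living_room_tv",
    "bedroom_bed": "bedroom_projector",
}

def pick_display_object(object_name: str) -> Optional[str]:
    """Map a recognized object name to its position, then to the display
    device registered for that position; None if no mapping exists."""
    position = POSITION_OF_OBJECT.get(object_name)
    if position is None:
        return None
    return DISPLAY_OF_POSITION.get(position)
```

In this sketch, recognizing "sofa" in the video would select the living-room television as the display object, matching the example in the text.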
Step S202, the Internet of Things platform acquires gesture information of the target object, determines the moving direction of the selection object according to the gesture information, displays a 3D virtual image of the controllable Internet of Things devices on the display object, and, taking a specific position of the 3D virtual image as the center, moves a selection cursor through the 3D virtual image in the moving direction;
the selection cursor may be sequentially moved in the 3D virtual image in the moving direction in various manners, for example, left movement is taken as an example, the selection cursor is sequentially moved in the 3D virtual image in the left direction in the center, if the selection cursor is moved to the leftmost side, the selection cursor is moved in a clockwise or counterclockwise rotation manner until all the devices of the internet of things are moved, and other moving manners are naturally also possible, which are not limited herein, and only the large direction needs to be ensured to be the moving direction corresponding to the gesture information.
The 3D virtual image may be confirmed through the identity of the target object (the identity may be confirmed by face recognition, fingerprint recognition, or other biometric means). Because the 3D virtual image generally depicts an office or a home, where the number and positions of the Internet of Things devices are relatively fixed, a mapping between an identity and a 3D virtual image can be preset (one-to-one or one-to-two). When the mapping is one-to-two, the specific 3D virtual image can be determined from the position coordinates: if the position coordinates are close to the home, the home's 3D virtual image is chosen; if they are close to the company, the company's 3D virtual image is chosen.
By way of example, determining the specific position of the 3D virtual image (as shown in fig. 4) may specifically include:
if the position information belongs to a specific area, determining the position information as the specific position, such as a bed position or a sofa position; if the position information does not belong to a specific area (i.e., a non-home, non-office area), acquiring a preset position of the 3D virtual image, such as a gate position or a sofa position, as the specific position.
By way of example, the gesture information described above may be recognized in a variety of ways, for example using ultrasound or using video.
Taking ultrasonic gesture recognition as an example, the method may specifically include:
ultrasonic waves also belong to a type of sound wave which has some characteristics of sound waves, such as a velocity of around 340m/s, for example the frequency of the ultrasonic wave is related to the generating source, which does not change frequency due to reflection. Experiments show that the sum of the emission distance and the reflection distance is shortest when the reflecting object is positioned at the central position, and the sum of the emission distance and the reflection distance is shortest when the position of the reflecting object is positioned at the central point of the connecting line of the ultrasonic transmitter and the ultrasonic receiver if the distance between the reflecting object and the intelligent mobile phone is unchanged through multiple experimental analysis. If the sum of the emission distance and the emission distance of the center position is shortest, the sum is reflected on the parameter of ultrasonic wave, namely, the difference between the emission time and the receiving time is the least, and the closer the sum is to the center position, the closer the corresponding difference between the emission time and the receiving time is to the shortest time difference (namely, the difference between the emission time and the receiving time of the center position), otherwise, the larger the corresponding difference between the emission time and the receiving time is, the farther the sum is from the center position, based on the principle that the waving of an emitter (hand) can be determined, then the waving direction can be determined according to the reflection intensities, and the moving direction can be further confirmed.
For example, for position a (to the left of the center position), the corresponding path L1 is divided into La-1 and La-2; for position c, the corresponding path Lc is divided into Lc-1 and Lc-2. Observation shows that La-1 is less than Lc-1, where La-1 and Lc-1 are transmission paths and La-2 and Lc-2 are reflection paths. According to the principle of ultrasonic reflection, the intensity of a reflected signal is far lower than that of the transmitted signal, while the attenuation per unit distance is the same. Let fa be the frequency of the ultrasonic wave transmitted toward position a and fc that toward position c. Since the transmission path La-1 to position a is shorter, the intensity of the ultrasonic wave arriving at a is greater than that arriving at c; and because the reflector (the hand) is the same, the reflection attenuation coefficient is also the same, so the initial intensity of La-2 after reflection is greater than that of Lc-2. The received signal intensity corresponding to fa therefore differs measurably from that corresponding to fc, and this intensity difference reveals on which side of the center the hand lies.
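The time-of-flight reasoning above can be sketched as follows, under the added assumption of one transmitter and two receivers (left and right) so that the side of the hand is observable. The patent itself derives the direction from reflection intensities; this sketch approximates the same idea with echo-delay trends, so it is an illustration, not the patent's exact method:

```python
SPEED_OF_SOUND = 340.0  # m/s, as stated in the text above

def echo_delay(path_out_m: float, path_back_m: float) -> float:
    """Round-trip delay in seconds for the given emission and
    reflection path lengths."""
    return (path_out_m + path_back_m) / SPEED_OF_SOUND

def waving_direction(left_delays, right_delays):
    """Infer 'left' or 'right' from per-ping delay pairs: the hand is
    moving toward the receiver whose echo delay shrinks over time."""
    left_trend = left_delays[-1] - left_delays[0]
    right_trend = right_delays[-1] - right_delays[0]
    return "left" if left_trend < right_trend else "right"
```

A hand 17 cm away gives a round trip of 0.34 m and hence a delay of about one millisecond, which is the quantity whose minimum marks the center position in the text.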
Step S203, the Internet of Things platform acquires video information of the target object; when the video information is recognized as a confirmation command, it stops the movement of the selection cursor to determine the first Internet of Things device selected by the target object, then acquires voice information of the target object, recognizes the voice information to obtain control information, converts the control information into a control command corresponding to the first Internet of Things device, and sends the control command to the first Internet of Things device.
For example, determining whether the video information is a confirmation command may specifically include:
forming the video information into input data and inputting the input data into a neural network model to calculate an output result; if the output result determines that the video information contains nodding information, the video information is determined to be a confirmation command, otherwise it is determined to be a non-confirmation command. The method of inputting the input data into the neural network model to calculate the output result may be an existing calculation method, and the method of determining from the output result that the video information contains nodding information may also be an existing method.
The speech recognition method may be a general-purpose one, for example ChatGPT-based or other AI speech recognition; the present application does not limit the specific speech recognition method.
According to the above technical scheme, the Internet of Things cloud platform acquires the position information of a target object and determines the Internet of Things display object according to the position information; the platform acquires gesture information of the target object, determines the moving direction of the selection object according to the gesture information, displays a 3D virtual image of the controllable Internet of Things devices on the display object, and moves a selection cursor through the 3D virtual image in the moving direction around the determined center; the platform then acquires video information of the target object and, when the video information is recognized as a confirmation command, stops the movement of the selection cursor to determine the first Internet of Things device selected by the target object; finally, it acquires voice information of the target object, recognizes the voice information to obtain control information, converts the control information into a control command corresponding to the first Internet of Things device, and sends the control command to that device. In this scheme, the moving direction of the selection cursor is controlled through gestures, the first Internet of Things device in the 3D virtual image is determined once a confirmation command is obtained, and the control command is then obtained through voice recognition.
For example, moving the selection cursor through the 3D virtual image in the moving direction may specifically include:
acquiring the first identity of the target object, acquiring the current first period, and extracting all history information of the first identity within the first period, the history information including the Internet of Things devices that were operated; determining all the Internet of Things devices contained in the history information as the movement range of the selection cursor, and moving the selection cursor within that movement range of the 3D virtual image in the moving direction.
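The history-based restriction above can be sketched as a filter over operation records. The record format and the 30-day period are assumptions for illustration; the text only speaks of a "first period":

```python
from datetime import datetime, timedelta

def movement_range(history, identity, now, period=timedelta(days=30)):
    """Devices the given identity operated within the recent period;
    the selection cursor is then restricted to this set."""
    cutoff = now - period
    return sorted({
        record["device"]
        for record in history
        if record["identity"] == identity and record["time"] >= cutoff
    })
```

Restricting the cursor to recently used devices shortens the traversal described in step S202 for users who only ever operate a few of the devices shown in the 3D virtual image.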
Obtaining the first identity of the target object may specifically include:
A1, extract the center point of the fingerprint picture information. The center point may be extracted as follows: construct a rectangular frame containing the fingerprint pixel points and connect the four end points of the rectangular frame with two diagonal lines; the intersection of the diagonals is the center point. In practical applications, the center point may also be determined in other ways.
A2, construct m concentric circles with the center point as the circle center, the radii of the m concentric circles being m preset values;
The value of m may be any of 3, 4, or 5.
A3, extract the y intersection points between the circumferences of the m concentric circles and the fingerprint ridges;
A4, calculate the distances between the y intersection points and the circle center to obtain y distances, and arrange the y distances in descending order to obtain a first vector;
A5, construct m concentric circles in the first preset picture and extract the z1 intersection points of the first preset picture; if z1 = y, calculate the distances between the z1 intersection points and the circle center in the first preset picture to obtain z1 distances, arrange the z1 distances to obtain a second vector, and calculate the difference between the first vector and the second vector to obtain a first difference value; if the first difference value is smaller than a first threshold value, determine the identity corresponding to the first preset picture as the first identity of the target object.
Optionally, the method may further include:
if z1 is not equal to y, extracting subsequent preset pictures until an i-th preset picture is found whose number of intersection points satisfies zi = y and whose (i+1)-th vector differs from the first vector by less than the first threshold value, and determining the identity corresponding to the i-th preset picture as the first identity of the target object.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an AIOT Internet of Things cloud platform visual intelligent identification management system provided in the present application. The system is applied to an Internet of Things cloud platform, and includes:
an obtaining unit 301, configured to obtain position information of a target object, determine an Internet of Things display object according to the position information, and acquire gesture information of the target object;
a processing unit 302, configured to determine a movement direction of the selection object according to the gesture information, display a 3D virtual graph of the controllable Internet of Things devices on the display object, and, with a specific position of the 3D virtual graph as the center, sequentially move the selection cursor in the 3D virtual graph according to the movement direction;
the acquiring unit 301 is further configured to acquire video information of a target object;
and a calculation control unit 303, configured to: when the video information is a confirmation command, stop the movement of the selection cursor to determine the first Internet of Things device selected by the target object; acquire voice information of the target object; recognize the voice information to obtain control information; convert the control information into a control command corresponding to the first Internet of Things device; and send the control command to the first Internet of Things device.
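A hedged sketch of the calculation control unit's flow, with the speech recognizer, command builder, and sender injected as callables (all names and the dictionary layout are illustrative assumptions, not from the patent):

```python
def control_first_device(video_info, cursor, recognize_speech, build_command, send):
    """When the video information is a confirmation command, freeze the
    selection cursor, take the device it rests on as the first IoT device,
    turn the recognized speech into a control command for that device,
    and send it. Returns the selected device ID, or None if the video
    information was not a confirmation."""
    if not video_info.get("is_confirmation"):
        return None
    cursor["moving"] = False                 # stop the selection cursor
    device_id = cursor["current_device"]     # device the cursor rests on
    control_info = recognize_speech(video_info["voice"])
    command = build_command(device_id, control_info)
    send(device_id, command)
    return device_id
```

In a real deployment `recognize_speech` would wrap a speech-recognition service and `send` would publish over the platform's IoT messaging channel; both are stubbed here.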
Optionally,
the processing unit is specifically configured to determine a first identity of the target object, and query a first mapping relationship from the mapping relationships between identities and 3D virtual graphs according to the first identity; if the first mapping relationship is a one-to-one mapping, determine the 3D virtual graph uniquely corresponding to the first mapping relationship as the obtained 3D virtual graph; if the first mapping relationship is a one-to-two mapping, acquire a first coordinate of the target object, and determine the 3D virtual graph closest to the first coordinate in the first mapping relationship as the obtained 3D virtual graph.
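The one-to-one versus one-to-two mapping lookup could be sketched as follows (the mapping layout — identity mapped to a list of (graph, position) pairs — and the 2D coordinate format are assumptions for illustration):

```python
import math

def select_3d_graph(mapping, identity, coordinate=None):
    """Look up the 3D virtual graph for an identity. With a one-to-one
    mapping the single graph is returned directly; with a one-to-two
    mapping the graph whose position is nearest to the target object's
    first coordinate is returned."""
    graphs = mapping[identity]           # list of (graph_name, (x, y)) pairs
    if len(graphs) == 1:
        return graphs[0][0]              # one-to-one: unique graph
    # one-to-two: pick the graph closest to the object's coordinate
    return min(graphs, key=lambda g: math.dist(coordinate, g[1]))[0]
```

`math.dist` (Python 3.8+) computes the Euclidean distance between the two positions.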
Optionally,
the acquiring unit is specifically configured to acquire the position coordinates of the target object; when it is determined according to the position coordinates that the target object is in a non-specific area, acquire the ID of the Internet of Things device that collects the video information of the target object, and determine the Internet of Things device corresponding to the ID as the display object.
Optionally, the refinements of the method in the method embodiment shown in fig. 2 may also be performed by the respective units of the AIOT Internet of Things cloud platform visual intelligent identification management system, which are not described again here.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division of the units is merely a logical function division, and there may be other manners of division in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections via some interfaces, apparatuses, or units, and may be in electrical or other forms.
The foregoing describes the embodiments of the present invention in detail. The principles and implementations of the present invention are explained herein using specific examples, and the above descriptions of the embodiments are provided only to help understand the method and core ideas of the present invention. Meanwhile, a person skilled in the art may make changes to the specific implementations and the application scope in accordance with the ideas of the present invention. In view of the above, the content of this description should not be construed as limiting the present invention.

Claims (10)

1. An AIOT Internet of Things cloud platform visual intelligent identification management method, characterized by comprising the following steps:
the Internet of Things cloud platform obtains position information of a target object, and determines an Internet of Things display object according to the position information;
the Internet of Things cloud platform obtains gesture information of the target object, determines a movement direction of a selection object according to the gesture information, displays a 3D virtual graph of the controllable Internet of Things devices on the display object, and, with a specific position of the 3D virtual graph as the center, sequentially moves a selection cursor in the 3D virtual graph according to the movement direction;
and the Internet of Things cloud platform obtains video information of the target object; when the video information is a confirmation command, stops the movement of the selection cursor to determine the first Internet of Things device selected by the target object; acquires voice information of the target object, recognizes the voice information to obtain control information, converts the control information into a control command corresponding to the first Internet of Things device, and sends the control command to the first Internet of Things device.
2. The method according to claim 1, wherein the manner of obtaining the 3D virtual graph specifically comprises:
determining a first identity of the target object, and querying a first mapping relationship from the mapping relationships between identities and 3D virtual graphs according to the first identity; if the first mapping relationship is a one-to-one mapping, determining the 3D virtual graph uniquely corresponding to the first mapping relationship as the obtained 3D virtual graph; if the first mapping relationship is a one-to-two mapping, acquiring a first coordinate of the target object, and determining the 3D virtual graph closest to the first coordinate in the first mapping relationship as the obtained 3D virtual graph.
3. The method of claim 1, wherein obtaining location information of the target object, and determining the internet of things display object according to the location information specifically comprises:
acquiring the position coordinates of the target object; when it is determined according to the position coordinates that the target object is in a non-specific area, acquiring the ID of the Internet of Things device that collects the video information of the target object, and determining the Internet of Things device corresponding to the ID as the display object.
4. The method of claim 1, wherein obtaining location information of the target object, and determining the internet of things display object according to the location information specifically comprises:
acquiring the position coordinates of the target object; when it is determined according to the position coordinates that the target object is in a specific area, acquiring the collected video information of the target object, performing object recognition on the video information to determine an object name in the video information, determining position information within the specific area according to the object name, and selecting the Internet of Things device corresponding to that position information within the specific area as the display object.
5. The method according to claim 1, wherein the specific position of the 3D virtual graph is determined as follows:
if the position information belongs to a specific area, determining the position information as the specific position; and if the position information does not belong to the specific area, acquiring a preset position of the 3D virtual graph as the specific position.
6. The method of claim 1, wherein determining whether the video information is a confirmation command comprises:
forming the video information into input data, and inputting the input data into a neural network model to calculate an output result; if the output result indicates that the video information contains nodding information, determining the video information as a confirmation command; otherwise, determining the video information as a non-confirmation command.
7. The method according to claim 1, wherein sequentially moving the selection cursor in the 3D virtual graph according to the movement direction specifically comprises:
acquiring a first identity of the target object, acquiring the current first period, and extracting all history information of the first identity within the first period, wherein the history information includes the Internet of Things devices that have been operated; determining all Internet of Things devices contained in the history information as the movement range of the selection cursor; and sequentially moving the selection cursor within that movement range of the 3D virtual graph according to the movement direction.
8. An AIOT Internet of Things cloud platform visual intelligent identification management system, characterized in that the system is applied to an Internet of Things cloud platform, and the system comprises:
the acquisition unit is used for acquiring the position information of the target object and determining the display object of the Internet of things according to the position information; acquiring gesture information of a target object;
a processing unit, configured to determine a movement direction of the selection object according to the gesture information, display a 3D virtual graph of the controllable Internet of Things devices on the display object, and, with a specific position of the 3D virtual graph as the center, sequentially move a selection cursor in the 3D virtual graph according to the movement direction;
the acquisition unit is also used for acquiring video information of the target object;
and a calculation control unit, configured to: when the video information is a confirmation command, stop the movement of the selection cursor to determine the first Internet of Things device selected by the target object; acquire voice information of the target object; recognize the voice information to obtain control information; convert the control information into a control command corresponding to the first Internet of Things device; and send the control command to the first Internet of Things device.
9. The system according to claim 8, wherein,
the processing unit is specifically configured to determine a first identity of the target object, and query a first mapping relationship from the mapping relationships between identities and 3D virtual graphs according to the first identity; if the first mapping relationship is a one-to-one mapping, determine the 3D virtual graph uniquely corresponding to the first mapping relationship as the obtained 3D virtual graph; if the first mapping relationship is a one-to-two mapping, acquire a first coordinate of the target object, and determine the 3D virtual graph closest to the first coordinate in the first mapping relationship as the obtained 3D virtual graph.
10. The system according to claim 8, wherein,
the acquiring unit is specifically configured to acquire the position coordinates of the target object; when it is determined according to the position coordinates that the target object is in a non-specific area, acquire the ID of the Internet of Things device that collects the video information of the target object, and determine the Internet of Things device corresponding to the ID as the display object.
CN202311554629.6A 2023-11-20 2023-11-20 AIOT Internet of Things cloud platform visual intelligent identification management method and system Pending CN117492569A (en)

Publications (1)

Publication Number Publication Date
CN117492569A true CN117492569A (en) 2024-02-02



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination