CN114743240A - Internet of things ultra-low power consumption application communication management method and related equipment

Internet of things ultra-low power consumption application communication management method and related equipment

Info

Publication number: CN114743240A
Application number: CN202210323413.8A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: user, information, input, expression, facial feature
Inventors: 肖垚, 蒋驰, 王旸
Current and original assignee: Mingyang Industrial Technology Research Institute Shenyang Co., Ltd. (the listed assignee may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by: Mingyang Industrial Technology Research Institute Shenyang Co., Ltd.
Priority: CN202210323413.8A
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 User authentication
    • G06F21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 Protecting data
    • G06F21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218 Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245 Protecting personal data, e.g. for financial or medical purposes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Abstract

The embodiments of the present application provide an Internet of things ultra-low power consumption application communication management method and related equipment, which can address the problem that an intelligent terminal carries a certain privacy exposure risk and thus creates a hidden danger to the user's privacy security. The method includes: when a target client is detected to be in an information input state, starting the front-facing imaging device of the terminal to which the target client belongs to obtain the current distance between the terminal and the user's face; determining that the current input state is a privacy-risk input state when the current distance falls outside a preset range around the user's habitual distance; and, based on the privacy-risk input state, displaying the content input to the target client only after privacy processing, while sending the content as it was before privacy processing to a target receiving end.

Description

Internet of things ultra-low power consumption application communication management method and related equipment
Technical Field
The present application relates to the technical field of big data, and in particular to an Internet of things ultra-low power consumption application communication management method and related equipment.
Background
An intelligent terminal is a kind of embedded computer system device, so its architecture is consistent with the general embedded-system architecture. At the same time, because the intelligent terminal is one application direction of embedded systems, its application scenarios are more clearly defined; its architecture is therefore more specific than that of an ordinary embedded system, with finer granularity and some characteristics of its own. People increasingly study and live through intelligent terminals and have become dependent on them, but in daily use an intelligent terminal carries a certain privacy exposure risk, creating a hidden danger to the user's privacy security.
Disclosure of Invention
The embodiments of the present application provide an Internet of things ultra-low power consumption application communication management method and related equipment, which can address the problem that an intelligent terminal carries a certain privacy exposure risk and thus creates a hidden danger to the user's privacy security.
A first aspect of the embodiments of the present application provides an Internet of things ultra-low power consumption application communication management method, including:
when a target client is detected to be in an information input state, starting the front-facing imaging device of the terminal to which the target client belongs to obtain the current distance between the terminal and the user's face;
determining that the current input state is a privacy-risk input state when the current distance falls outside a preset range around the user's habitual distance;
and displaying the content input to the target client after privacy processing based on the privacy-risk input state, and sending the content as it was before privacy processing to a target receiving end.
Optionally, the user's habitual distance is the habitual distance of the user to which the target client belongs, or the habitual distance of all users as recorded by the cloud.
Optionally, the privacy processing of the content input to the target client based on the privacy-risk input state includes:
performing type analysis on the content input to the target client based on the privacy-risk input state;
and, when the type of the content is a preset type, displaying the content only after blurring it, displaying placeholder characters in its place, or substituting a preset identifier for it, and sending the content as it was before privacy processing to the target receiving end.
Optionally, the method further includes:
when a target client is detected to be in an information input state, starting the front-facing imaging device of the terminal to which the target client belongs to acquire the user's current first facial feature information, where the target client is an input client and the facial feature information is used to indicate the user's expression state;
generating a candidate input expression list based on the user's current input information and the first facial feature information;
and displaying the candidate input expression list for the user to select from.
Optionally, the method further comprises:
while the candidate input expression list is displayed, acquiring the user's current second facial feature information;
and determining whether to update the candidate input expression list based on the second facial feature information.
Optionally, the method further comprises:
acquiring candidate input expression sample information comprising the user's historical input information, historical facial feature information, and selection information based on historical candidate input expression lists;
and training a neural network model with the candidate input expression sample information to obtain a candidate input expression analysis model;
where generating a candidate input expression list based on the user's current input information and the first facial feature information includes:
generating the candidate input expression list through the candidate input expression analysis model based on the user's current input information and the first facial feature information.
Optionally, the candidate input expression list includes first user expression information, where
the first user expression information comprises the user facial image data to which the first facial feature information belongs and the user's current input information; or, alternatively,
the first user expression information comprises the user facial image data to which the first facial feature information belongs and text information corresponding to the first facial feature information.
Optionally, the candidate input expression list includes second user expression information, where
the second user expression information comprises the user facial image data to which the first facial feature information belongs, the user's current input information, and first scene image information, the first scene image information being acquired automatically in real time by an imaging device of the terminal; or, alternatively,
the second user expression information comprises the user facial image data to which the first facial feature information belongs, text information corresponding to the first facial feature information, and first scene image information, the first scene image information being acquired automatically in real time by an imaging device of the terminal.
Optionally, the candidate input expression list includes third user expression information, where
the third user expression information comprises the user facial image data to which the first facial feature information belongs, the user's current input information, and second scene image information, the second scene image information being generated based on the location information of the terminal; or, alternatively,
the third user expression information comprises the user facial image data to which the first facial feature information belongs, text information corresponding to the first facial feature information, and second scene image information generated based on the location information of the terminal.
Optionally, the method further includes:
acquiring message data from the user's target contact in the target client, and generating the third user expression information when the message data includes location inquiry information.
A second aspect of the embodiments of the present application provides an Internet of things ultra-low power consumption application communication management apparatus, including:
an acquisition unit, configured to, when a target client is detected to be in an information input state, start the front-facing imaging device of the terminal to which the target client belongs to acquire the user's current first facial feature information, where the target client is an input client and the facial feature information is used to indicate the user's expression state;
a generating unit, configured to generate a candidate input expression list based on the user's current input information and the first facial feature information;
and a display unit, configured to display the candidate input expression list for the user to select from.
A third aspect of the embodiments of the present application provides an electronic device, including a memory and a processor, where the processor is configured to implement the steps of the above Internet of things ultra-low power consumption application communication management method when executing a computer program stored in the memory.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the above Internet of things ultra-low power consumption application communication management method.
In summary, according to the Internet of things ultra-low power consumption application communication management method provided by the embodiments of the present application, when a target client is detected to be in an information input state, the front-facing imaging device of the terminal to which the target client belongs is started to obtain the current distance between the terminal and the user's face; when the current distance falls outside a preset range around the user's habitual distance, the current input state is determined to be a privacy-risk input state; and, based on the privacy-risk input state, the content input to the target client is displayed only after privacy processing, while the content as it was before privacy processing is sent to a target receiving end. Because the preset distance range is set from historical data or big data, a current distance that is too far from or too close to the user's face can be used to predict that the user has noticed a privacy exposure risk and has manually adjusted the distance between the terminal and the face. When such an adjustment is predicted to stem from a privacy exposure risk, the content input and displayed on the intelligent terminal can be desensitized to avoid privacy disclosure; on the other hand, the content input by the user is still sent normally to the corresponding target receiving end, avoiding the situation in which, because of privacy protection, the target receiving end cannot normally receive the information content that should have been sent.
Correspondingly, the Internet of things ultra-low power consumption application communication management apparatus, the electronic device, and the computer-readable storage medium provided by the embodiments of the present application also have the above technical effects.
Drawings
Fig. 1 is a schematic flowchart of a possible communication management method for an internet of things ultra-low power application according to an embodiment of the present disclosure;
fig. 2 is a schematic structural block diagram of a possible communication management device for an internet of things with ultra-low power consumption application according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a hardware structure of a possible communication management device for an internet of things with ultra-low power consumption application according to an embodiment of the present disclosure;
fig. 4 is a schematic structural block diagram of a possible electronic device provided in an embodiment of the present application;
fig. 5 is a schematic structural block diagram of a possible computer-readable storage medium provided in an embodiment of the present application.
Detailed Description
The Internet of things ultra-low power consumption application communication management method and related equipment of the embodiments of the present application can also address the following problems: using only the text input by the user as the basis for generating candidate expressions cannot accurately capture the content the user currently wants to express, and the user must input the complete text before matching can occur, so expression generation is inefficient and user information is not fully utilized.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments.
Referring to fig. 1, a flowchart of the Internet of things ultra-low power consumption application communication management method provided in an embodiment of the present application may specifically include steps S110 to S130.
S110, when a target client is detected to be in an information input state, starting the front-facing imaging device of the terminal to which the target client belongs to obtain the current distance between the terminal and the user's face.
Illustratively, the terminal may be an intelligent terminal with a display panel, such as a mobile phone or a tablet computer.
S120, determining that the current input state is a privacy-risk input state when the current distance falls outside a preset range around the user's habitual distance.
For example, a user usually has a customary use distance when using an intelligent terminal with a display panel, such as a mobile phone or a tablet computer. This distance reflects the user's usage habits, and the preset distance range may be determined from it.
For example, when a user notices a current privacy exposure risk, the distance between the terminal and the face is usually adjusted manually: the screen may be lifted close to the face so bystanders cannot view it, or lowered or moved behind some shelter so that it remains within the user's own line of sight while people nearby can no longer see it and privacy is not revealed. Therefore, whether the current input state is a privacy-risk input state can be judged by whether the current distance falls outside the preset range around the user's habitual distance. The privacy-risk input state can be understood as an input state in which a privacy exposure risk exists.
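To make the S120 decision concrete, the following is a minimal sketch in Python; the habitual distance, the tolerance, and the example values are illustrative assumptions, since the patent does not fix how the preset range is derived from historical data or big data:

```python
# Minimal sketch of the S120 privacy-risk decision. HabitProfile values are
# hypothetical: the text only says the range comes from the user's history
# or from cloud-recorded big data.
from dataclasses import dataclass

@dataclass
class HabitProfile:
    habitual_cm: float   # the user's customary face-to-screen distance
    tolerance_cm: float  # allowed deviation before flagging a risk

def is_privacy_risk_state(current_cm: float, profile: HabitProfile) -> bool:
    """Return True when the measured distance falls outside the preset
    range around the user's habitual distance."""
    lower = profile.habitual_cm - profile.tolerance_cm
    upper = profile.habitual_cm + profile.tolerance_cm
    return not (lower <= current_cm <= upper)

profile = HabitProfile(habitual_cm=35.0, tolerance_cm=10.0)
print(is_privacy_risk_state(18.0, profile))  # True: screen pulled close to the face
print(is_privacy_risk_state(33.0, profile))  # False: within the habitual range
```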
S130, based on the privacy-risk input state, displaying the content input to the target client after privacy processing, and sending the content as it was before privacy processing to a target receiving end.
Illustratively, the content input and displayed on the intelligent terminal can be desensitized to avoid privacy disclosure, while the content input by the user is still sent normally to the corresponding target receiving end, avoiding the situation in which the target receiving end cannot normally receive the information content that should have been sent because of privacy protection.
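A sketch of this split path follows, assuming hypothetical mask() and send() helpers; the point is only that the display layer and the transmission layer see different versions of the same input:

```python
# Sketch of the S130 split path: the screen shows the privacy-processed text,
# while the target receiving end gets the pre-processing content unchanged.
def mask(text: str) -> str:
    # Placeholder-character display: one '*' per input character.
    return "*" * len(text)

def handle_input(text: str, privacy_risk: bool, send) -> str:
    shown = mask(text) if privacy_risk else text  # what the display renders
    send(text)                                    # receiver gets the original
    return shown

outbox = []
shown = handle_input("my card PIN is 2468", privacy_risk=True, send=outbox.append)
print(shown)      # masked text rendered on screen
print(outbox[0])  # original content delivered to the target receiving end
```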
According to the Internet of things ultra-low power consumption application communication management method provided by this embodiment, when a target client is detected to be in an information input state, the front-facing imaging device of the terminal to which the target client belongs is started to obtain the current distance between the terminal and the user's face; when the current distance falls outside a preset range around the user's habitual distance, the current input state is determined to be a privacy-risk input state; and, based on the privacy-risk input state, the content input to the target client is displayed only after privacy processing, while the content as it was before privacy processing is sent to a target receiving end. Because the preset distance range is set from historical data or big data, a current distance that is too far or too close can be used to predict that the user has noticed a privacy exposure risk and has manually adjusted the distance between the terminal and the face. When the adjustment is predicted to stem from a privacy exposure risk, the displayed input content can be desensitized to avoid privacy disclosure, while the content input by the user is still sent normally to the corresponding target receiving end, so the target receiving end is not prevented by privacy protection from receiving the information content that should have been sent.
According to some embodiments, the user's habitual distance is the habitual distance of the user to which the target client belongs, or the habitual distance of all users as recorded by the cloud.
According to some embodiments, the privacy processing of the content input to the target client based on the privacy-risk input state comprises:
performing type analysis on the content input to the target client based on the privacy-risk input state;
and, when the type of the content is a preset type, displaying the content only after blurring it, displaying placeholder characters in its place, or substituting a preset identifier for it, and sending the content as it was before privacy processing to the target receiving end; a sketch of this type-based masking follows.
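The following sketch illustrates the type analysis and the three processing options. The regex-based classifier, the "preset types" chosen here (phone-like and ID-like digit runs), and the textual stand-in for blurring are all assumptions for illustration, not the patent's actual rules:

```python
# Hedged sketch of type analysis plus the three masking options named in the
# text. The preset types and patterns below are assumed examples only.
import re

PRESET_TYPES = {
    "phone": re.compile(r"\b1\d{10}\b"),       # assumed mainland mobile format
    "id_number": re.compile(r"\b\d{15,18}\b"), # assumed ID-like digit run
}

def classify(text: str) -> str | None:
    """Return the matched preset type name, or None for non-sensitive text."""
    for type_name, pattern in PRESET_TYPES.items():
        if pattern.search(text):
            return type_name
    return None

def apply_privacy(text: str, mode: str = "placeholder") -> str:
    if classify(text) is None:
        return text                      # not a preset type: show unchanged
    if mode == "placeholder":
        return "*" * len(text)           # placeholder-character display
    if mode == "identifier":
        return "[private]"               # preset-identifier substitution
    return text[:2] + "..."              # crude textual stand-in for blurring

print(apply_privacy("call me at 13912345678"))
print(apply_privacy("call me at 13912345678", mode="identifier"))
```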
It can be understood that an input method editor is software that implements text input, also known as input method software, an input method platform, an input method framework, or an input method system; in China, input method software is usually abbreviated simply as an input method. Users' requirements for input methods keep growing. At present, to meet input demands, some input method software automatically matches corresponding expressions by recognizing the text the user has typed, and the user can select a matched expression and key it into the input content to help express what they want to say. However, current expression-assisted prompting uses only the text input by the user as the basis for generating candidate expressions; it cannot accurately capture the content the user currently needs to express, and matching happens only after the user inputs the complete text, so expression generation is inefficient and user information is not fully utilized.
According to some embodiments, to solve the above problems, the Internet of things ultra-low power consumption application communication management method may further include:
when a target client is detected to be in an information input state, starting the front-facing imaging device of the terminal to which the target client belongs to acquire the user's current first facial feature information, where the target client is an input client and the facial feature information is used to indicate the user's expression state.
For example, the target client may be an input method client, or an information publishing client or information transmitting client that inputs information through an input method application.
For example, the first facial feature information may be generated from user facial image data acquired in real time by the front-facing imaging device. The facial feature information may comprise a combination of feature information describing the states of the user's facial features and facial contour, or it may be the expression state analyzed from that combination, such as happiness, unease, anger, or fear.
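As one way to picture how such a combination of feature information could be reduced to an expression state, here is a deliberately simplified sketch; the feature names, thresholds, and four-state label set are illustrative assumptions, not the patent's actual analysis:

```python
# Toy mapping from normalized facial measurements (0..1) to a coarse
# expression state. Thresholds are assumed values for illustration only.
def expression_state(mouth_curve: float, brow_lower: float, eye_open: float) -> str:
    if mouth_curve > 0.6:                     # upturned mouth dominates
        return "happiness"
    if brow_lower > 0.6 and eye_open < 0.4:   # lowered brows, narrowed eyes
        return "anger"
    if eye_open > 0.8 and brow_lower < 0.3:   # widened eyes, raised brows
        return "fear"
    return "unease"                           # default residual state

print(expression_state(mouth_curve=0.8, brow_lower=0.2, eye_open=0.5))  # happiness
```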
And generating a candidate input expression list based on the user's current input information and the first facial feature information.
For example, the user's current input information need not be the complete content the user intends to input.
And displaying the candidate input expression list for the user to select from.
For example, the candidate input expression list may be built from the user's current input information and the first facial feature information, in particular from the expression of happiness, unease, anger, or fear derived from the first facial feature information.
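A minimal sketch of how the partial input and the expression state might jointly produce the list is given below; the sticker catalogue and keyword matching are hypothetical stand-ins for whatever matching the client actually performs:

```python
# Hedged sketch of candidate-list generation from partial input plus the
# expression state. The catalogue below is an assumed example.
STICKERS = {
    ("happiness", "great"): ["thumbs_up.png", "grin.png"],
    ("anger", "late"): ["steam.png"],
    ("happiness", None): ["smile.png"],  # expression-only fallback
}

def candidate_list(partial_input: str, state: str, limit: int = 5) -> list[str]:
    candidates: list[str] = []
    for word in partial_input.lower().split():
        candidates += STICKERS.get((state, word), [])
    candidates += STICKERS.get((state, None), [])  # matches needing no keyword
    seen: set[str] = set()
    ranked: list[str] = []
    for c in candidates:                 # de-duplicate, preserving rank order
        if c not in seen:
            seen.add(c)
            ranked.append(c)
    return ranked[:limit]

# The input is partial ("this is great" may not be the full message), yet
# candidates can already be offered.
print(candidate_list("this is great", "happiness"))
```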
According to the Internet of things ultra-low power consumption application communication management method provided by this embodiment, when a target client is detected to be in an information input state, the front-facing imaging device of the terminal to which the target client belongs is started to acquire the user's current first facial feature information, where the target client is an input client and the facial feature information is used to indicate the user's expression state; a candidate input expression list is generated based on the user's current input information and the first facial feature information; and the candidate input expression list is displayed for the user to select from. Because the terminal device can automatically acquire the user's first facial feature information through the front-facing imaging device while the user is inputting, and use it as a basis for generating the auxiliary-input expression list, the generated expressions better reflect the user's current state and more accurately convey the content and emotion the user wants to express. This also provides the candidate input expression list with a generation basis earlier, avoiding the situation in which a corresponding list can be generated only after the user has input the complete content. The method thereby solves the problems in existing schemes that the content the user currently needs to express cannot be accurately captured, that matching requires the user to input complete text, that expression generation is inefficient, and that user information is not fully utilized.
According to some embodiments, the method further comprises:
while the candidate input expression list is displayed, acquiring the user's current second facial feature information;
and determining whether to update the candidate input expression list based on the second facial feature information.
For example, the second facial feature information may be the facial feature information captured after the user sees the displayed candidate input expression list. Suppose the expressions in the list are all angry expressions; if, after the list is displayed, the user's current second facial feature information indicates a disapproving, negative, or confused state, the client can infer whether the user agrees with the currently matched candidate input expression list and thereby verify the user's current state. This determines whether to update the candidate input expression list, so the content the user currently needs to express can be conveyed more accurately.
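The update decision can be pictured with the sketch below; the label set and the rule that a contradictory or negative reaction counts as implicit rejection are assumptions for illustration:

```python
# Toy update decision based on the second facial feature information.
NEGATIVE_REACTIONS = {"unease", "anger", "fear", "confusion"}

def should_update(first_state: str, second_state: str) -> bool:
    """Regenerate the list when the reaction after display is negative or
    contradicts the state the list was generated from."""
    return second_state in NEGATIVE_REACTIONS or second_state != first_state

print(should_update(first_state="anger", second_state="confusion"))      # True
print(should_update(first_state="happiness", second_state="happiness"))  # False
```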
According to some embodiments, the method further comprises:
acquiring candidate input expression sample information comprising the user's historical input information, historical facial feature information, and selection information based on historical candidate input expression lists;
and training a neural network model with the candidate input expression sample information to obtain a candidate input expression analysis model;
where generating a candidate input expression list based on the user's current input information and the first facial feature information includes:
generating the candidate input expression list through the candidate input expression analysis model based on the user's current input information and the first facial feature information; a toy version of this training-and-generation step is sketched below.
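The sketch assumes the sample information has already been reduced to fixed-length numeric feature vectors (text features concatenated with facial features), with the index of the sticker the user actually selected as the label, and it uses scikit-learn's MLPClassifier as a stand-in for the unspecified neural network model; the data here is random, for shape only:

```python
# Hedged sketch: train a small neural network on historical (input, face,
# selection) samples, then rank candidate stickers for a new input.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 8))          # assumed 8-dim input+face feature vectors
y = rng.integers(0, 4, size=200)  # assumed labels: which of 4 stickers was picked

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
model.fit(X, y)  # stands in for the "candidate input expression analysis model"

def generate_candidates(features: np.ndarray, top_k: int = 3) -> list[int]:
    """Rank sticker indices by predicted selection probability."""
    probs = model.predict_proba(features.reshape(1, -1))[0]
    return list(np.argsort(probs)[::-1][:top_k])

print(generate_candidates(rng.random(8)))
```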
According to some embodiments, the candidate input expression list includes first user expression information, where
the first user expression information comprises the user facial image data to which the first facial feature information belongs and the user's current input information; or, alternatively,
the first user expression information comprises the user facial image data to which the first facial feature information belongs and text information corresponding to the first facial feature information.
For example, the user expression information in the candidate input expression list may be not only a generic expression generated by big-data matching but also a personalized expression containing the user's facial image data.
According to some embodiments, the candidate input expression list includes second user expression information, where
the second user expression information comprises the user facial image data to which the first facial feature information belongs, the user's current input information, and first scene image information, the first scene image information being acquired automatically in real time by an imaging device of the terminal; or, alternatively,
the second user expression information comprises the user facial image data to which the first facial feature information belongs, text information corresponding to the first facial feature information, and first scene image information, the first scene image information being acquired automatically in real time by an imaging device of the terminal.
For example, the user expression information in the candidate input expression list may be not only a generic expression and scene generated by big-data matching but also an expression containing the features of the user's current real-time environment. For example, if the imaging device captures that the user is in an office environment, an office scene can be added to the generated expression; if it captures that the user is at home, a home scene can be added.
According to some embodiments, the candidate input expression list includes third user expression information, where
the third user expression information comprises the user facial image data to which the first facial feature information belongs, the user's current input information, and second scene image information, the second scene image information being generated based on the location information of the terminal; or, alternatively,
the third user expression information comprises the user facial image data to which the first facial feature information belongs, text information corresponding to the first facial feature information, and second scene image information generated based on the location information of the terminal.
For example, the user expression information in the candidate input expression list may be not only a generic expression and scene generated by big-data matching but also an expression containing the features of the user's current real-time location. For example, if the terminal is currently at the beach, a beach scene or location can be added to the generated expression; if the user is at a hospital, a hospital scene or location can be added.
According to some embodiments, the method further comprises:
acquiring message data from the user's target contact in the target client, and generating the third user expression information when the message data includes location inquiry information.
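A sketch of this trigger follows; the inquiry phrases, the place-to-scene lookup, and the dictionary shape of the generated expression are all assumed for illustration:

```python
# Toy trigger: build the location-scene expression only when the incoming
# message data contains a location inquiry.
LOCATION_INQUIRIES = ("where are you", "where r u", "your location")
SCENE_BY_PLACE = {"beach": "beach_bg.png", "hospital": "hospital_bg.png"}

def maybe_third_expression(message: str, place: str, face_img: str):
    if not any(q in message.lower() for q in LOCATION_INQUIRIES):
        return None  # no location inquiry: nothing to generate
    scene = SCENE_BY_PLACE.get(place, "generic_bg.png")  # from terminal location
    return {"face": face_img, "scene": scene, "text": f"I'm at the {place}!"}

print(maybe_third_expression("Hey, where are you?", "beach", "me.png"))
```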
The Internet of things ultra-low power consumption application communication management method in the embodiments of the present application is described above; the Internet of things ultra-low power consumption application communication management apparatus in the embodiments of the present application is described below.
Referring to fig. 2, an embodiment of an Internet of things ultra-low power consumption application communication management apparatus described in the embodiments of the present application may include:
a detection unit 201, configured to, when a target client is detected to be in an information input state, start the front-facing imaging device of the terminal to which the target client belongs to obtain the current distance between the terminal and the user's face;
a determining unit 202, configured to determine that the current input state is a privacy-risk input state when the current distance falls outside a preset range around the user's habitual distance;
and a display unit 203, configured to display the content input to the target client after privacy processing based on the privacy-risk input state, and to send the content as it was before privacy processing to a target receiving end.
According to the Internet of things ultra-low power consumption application communication management apparatus provided by this embodiment, when a target client is detected to be in an information input state, the front-facing imaging device of the terminal to which the target client belongs is started to obtain the current distance between the terminal and the user's face; when the current distance falls outside a preset range around the user's habitual distance, the current input state is determined to be a privacy-risk input state; and, based on the privacy-risk input state, the content input to the target client is displayed only after privacy processing, while the content as it was before privacy processing is sent to a target receiving end. Because the preset distance range is set from historical data or big data, a current distance that is too far or too close can be used to predict that the user has noticed a privacy exposure risk and has manually adjusted the distance between the terminal and the face. When the adjustment is predicted to stem from a privacy exposure risk, the displayed input content can be desensitized to avoid privacy disclosure, while the content input by the user is still sent normally to the corresponding target receiving end, so the target receiving end is not prevented by privacy protection from receiving the information content that should have been sent.
Fig. 2 above describes the Internet of things ultra-low power consumption application communication management apparatus in the embodiments of the present application from the perspective of modular functional entities; the following describes it in detail from the perspective of hardware processing. Referring to fig. 3, an embodiment of an Internet of things ultra-low power consumption application communication management apparatus 300 in the embodiments of the present application includes:
an input device 301, an output device 302, a processor 303 and a memory 304, wherein the number of the processor 303 may be one or more, and one processor 303 is taken as an example in fig. 3. In some embodiments of the present application, the input device 301, the output device 302, the processor 303 and the memory 304 may be connected by a bus or other means, wherein fig. 3 illustrates the connection by the bus.
Wherein, by calling the operation instruction stored in the memory 304, the processor 303 is configured to perform the following steps:
when a target client is detected to be in an information input state, starting the front-facing imaging device of the terminal to which the target client belongs to obtain the current distance between the terminal and the user's face;
determining that the current input state is a privacy-risk input state when the current distance falls outside a preset range around the user's habitual distance;
and displaying the content input to the target client after privacy processing based on the privacy-risk input state, and sending the content as it was before privacy processing to a target receiving end.
Optionally, the user's habitual distance is the habitual distance of the user to which the target client belongs, or the habitual distance of all users as recorded by the cloud.
Optionally, the privacy processing of the content input to the target client based on the privacy-risk input state includes:
performing type analysis on the content input to the target client based on the privacy-risk input state;
and, when the type of the content is a preset type, displaying the content only after blurring it, displaying placeholder characters in its place, or substituting a preset identifier for it, and sending the content as it was before privacy processing to the target receiving end.
Optionally, the method further includes:
when a target client is detected to be in an information input state, starting the front-facing imaging device of the terminal to which the target client belongs to acquire the user's current first facial feature information, where the target client is an input client and the facial feature information is used to indicate the user's expression state;
generating a candidate input expression list based on the user's current input information and the first facial feature information;
and displaying the candidate input expression list for the user to select from.
Optionally, the method further includes:
while the candidate input expression list is displayed, acquiring the user's current second facial feature information;
and determining whether to update the candidate input expression list based on the second facial feature information.
Optionally, the method further includes:
acquiring candidate input expression sample information comprising the user's historical input information, historical facial feature information, and selection information based on historical candidate input expression lists;
and training a neural network model with the candidate input expression sample information to obtain a candidate input expression analysis model;
where generating a candidate input expression list based on the user's current input information and the first facial feature information includes:
generating the candidate input expression list through the candidate input expression analysis model based on the user's current input information and the first facial feature information.
Optionally, the candidate input expression list includes first user expression information, where
the first user expression information comprises the user facial image data to which the first facial feature information belongs and the user's current input information; or, alternatively,
the first user expression information comprises the user facial image data to which the first facial feature information belongs and text information corresponding to the first facial feature information.
Optionally, the candidate input expression list includes second user expression information, where
the second user expression information comprises the user facial image data to which the first facial feature information belongs, the user's current input information, and first scene image information, the first scene image information being acquired automatically in real time by an imaging device of the terminal; or, alternatively,
the second user expression information comprises the user facial image data to which the first facial feature information belongs, text information corresponding to the first facial feature information, and first scene image information, the first scene image information being acquired automatically in real time by an imaging device of the terminal.
Optionally, the candidate input expression list includes third user expression information, where
the third user expression information comprises the user facial image data to which the first facial feature information belongs, the user's current input information, and second scene image information, the second scene image information being generated based on the location information of the terminal; or, alternatively,
the third user expression information comprises the user facial image data to which the first facial feature information belongs, text information corresponding to the first facial feature information, and second scene image information generated based on the location information of the terminal.
Optionally, the method further includes:
acquiring message data from the user's target contact in the target client, and generating the third user expression information when the message data includes location inquiry information.
The processor 303 is also configured to perform any of the methods in the corresponding embodiments of fig. 1 by calling the operation instructions stored in the memory 304.
Referring to fig. 4, fig. 4 is a schematic view of an embodiment of an electronic device according to an embodiment of the present disclosure.
As shown in fig. 4, an electronic device 400 according to an embodiment of the present application includes a memory 410, a processor 420, and a computer program 411 stored in the memory 410 and runnable on the processor 420, where the processor 420 implements the following steps when executing the computer program 411:
when a target client is detected to be in an information input state, starting the front-facing imaging device of the terminal to which the target client belongs to obtain the current distance between the terminal and the user's face;
determining that the current input state is a privacy-risk input state when the current distance falls outside a preset range around the user's habitual distance;
and displaying the content input to the target client after privacy processing based on the privacy-risk input state, and sending the content as it was before privacy processing to a target receiving end.
Optionally, the user's habitual distance is the habitual distance of the user to which the target client belongs, or the habitual distance of all users as recorded by the cloud.
Optionally, the privacy processing of the content input to the target client based on the privacy-risk input state includes:
performing type analysis on the content input to the target client based on the privacy-risk input state;
and, when the type of the content is a preset type, displaying the content only after blurring it, displaying placeholder characters in its place, or substituting a preset identifier for it, and sending the content as it was before privacy processing to the target receiving end.
Optionally, the method further includes:
when a target client is detected to be in an information input state, starting the front-facing imaging device of the terminal to which the target client belongs to acquire the user's current first facial feature information, where the target client is an input client and the facial feature information is used to indicate the user's expression state;
generating a candidate input expression list based on the user's current input information and the first facial feature information;
and displaying the candidate input expression list for the user to select from.
Optionally, the method further includes:
while the candidate input expression list is displayed, acquiring the user's current second facial feature information;
and determining whether to update the candidate input expression list based on the second facial feature information.
Optionally, the method further includes:
acquiring candidate input expression sample information comprising the user's historical input information, historical facial feature information, and selection information based on historical candidate input expression lists;
and training a neural network model with the candidate input expression sample information to obtain a candidate input expression analysis model;
where generating a candidate input expression list based on the user's current input information and the first facial feature information includes:
generating the candidate input expression list through the candidate input expression analysis model based on the user's current input information and the first facial feature information.
Optionally, the candidate input expression list includes first user expression information, where
the first user expression information comprises the user facial image data to which the first facial feature information belongs and the user's current input information; or, alternatively,
the first user expression information comprises the user facial image data to which the first facial feature information belongs and text information corresponding to the first facial feature information.
Optionally, the candidate input expression list includes second user expression information, where
the second user expression information comprises the user facial image data to which the first facial feature information belongs, the user's current input information, and first scene image information, the first scene image information being acquired automatically in real time by an imaging device of the terminal; or, alternatively,
the second user expression information comprises the user facial image data to which the first facial feature information belongs, text information corresponding to the first facial feature information, and first scene image information, the first scene image information being acquired automatically in real time by an imaging device of the terminal.
Optionally, the candidate input expression list includes third user expression information, where
the third user expression information comprises the user facial image data to which the first facial feature information belongs, the user's current input information, and second scene image information, the second scene image information being generated based on the location information of the terminal; or, alternatively,
the third user expression information comprises the user facial image data to which the first facial feature information belongs, text information corresponding to the first facial feature information, and second scene image information generated based on the location information of the terminal.
Optionally, the method further includes:
acquiring message data from the user's target contact in the target client, and generating the third user expression information when the message data includes location inquiry information.
In a specific implementation, when the processor 420 executes the computer program 411, any of the embodiments corresponding to fig. 1 may be implemented.
Since the electronic device described in this embodiment is a device used to implement the Internet of things ultra-low power consumption application communication management apparatus in the embodiments of the present application, based on the method described in the embodiments, those skilled in the art can understand the specific implementation of this electronic device and its various modifications. Therefore, how the electronic device implements the method in the embodiments of the present application is not described in detail here; any device used by those skilled in the art to implement the method in the embodiments of the present application falls within the scope of protection of the present application.
Referring to fig. 5, fig. 5 is a schematic diagram illustrating an embodiment of a computer-readable storage medium according to the present application.
As shown in fig. 5, the present embodiment provides a computer-readable storage medium 500 having a computer program 511 stored thereon, the computer program 511 implementing the following steps when executed by a processor:
when a target client is detected to be in an information input state, starting the front-facing imaging device of the terminal to which the target client belongs to acquire the user's current first facial feature information, where the target client is an input client and the facial feature information is used to indicate the user's expression state;
generating a candidate input expression list based on the user's current input information and the first facial feature information;
and displaying the candidate input expression list for the user to select from.
In a specific implementation, the computer program 511 may implement any of the embodiments corresponding to fig. 1 when executed by a processor.
It should be noted that, in the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to relevant descriptions of other embodiments for parts that are not described in detail in a certain embodiment.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
An embodiment of the present application further provides a computer program product, where the computer program product includes computer software instructions, and when the computer software instructions are run on a processing device, the processing device is caused to execute a flow in the communication management method for an application with ultra low power consumption in the internet of things in the embodiment corresponding to fig. 1.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced, in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. An Internet of things ultra-low power consumption application communication management method is characterized by comprising the following steps:
under the condition that a target client is detected to be in an information input state, starting a front-facing imaging device of a terminal to which the target client belongs to obtain the current distance between the terminal and the face of a user;
determining that a current input state is a privacy risk input state under the condition that the current distance falls outside a preset distance range around the habitual distance of the user;
and based on the privacy risk input state, displaying the content input to the target client after carrying out privacy processing, and sending the content before the privacy processing to a target receiving end.
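By way of example and not limitation, the determination step of claim 1 may be sketched in Python as follows; this sketch is not part of the claims, and the 10 cm tolerance, the example distances, and the function name are assumptions introduced for illustration rather than values disclosed in the application:

```python
# Minimal sketch of the distance-based privacy-risk determination of claim 1.
# Illustration only: the tolerance value and example readings are assumptions.

def is_privacy_risk_input(current_distance_cm: float,
                          habitual_distance_cm: float,
                          tolerance_cm: float = 10.0) -> bool:
    """Return True when the current face-to-terminal distance falls outside
    the preset range around the user's habitual distance."""
    lower = habitual_distance_cm - tolerance_cm
    upper = habitual_distance_cm + tolerance_cm
    return not (lower <= current_distance_cm <= upper)

# Usage: a habitual distance of 35 cm with a +/-10 cm preset range; a reading
# of 80 cm (the screen may be visible to bystanders) flags a risk state.
print(is_privacy_risk_input(80.0, 35.0))  # True -> privacy risk input state
print(is_privacy_risk_input(38.0, 35.0))  # False -> normal input state
```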
2. The method according to claim 1, wherein the habitual distance of the user is a habitual distance of the user to whom the target client belongs, or a habitual distance of all users recorded in a cloud.
3. The method of claim 1, wherein the privacy processing of the content input to the target client based on the privacy risk input state comprises:
performing type analysis on the content input to the target client based on the privacy risk input state;
and under the condition that the type of the content input to the target client is a preset type, displaying the content after blurring it, displaying it with placeholder characters only, or substituting a preset identifier for it, and sending the content before the privacy processing to a target receiving end.
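By way of example and not limitation, the type analysis and masking of claim 3 might look as follows; the regular-expression patterns, the masking modes, and the example input are assumptions for illustration only, not the disclosed implementation:

```python
# Minimal sketch of claim 3: type analysis, then masked display while the
# unmodified content is sent onward. Patterns and modes are hypothetical.
import re
from typing import Optional

SENSITIVE_PATTERNS = {
    "phone_number": re.compile(r"\b\d{11}\b"),
    "id_number": re.compile(r"\b\d{17}[\dXx]\b"),
}

def analyze_type(content: str) -> Optional[str]:
    """Type analysis: return the preset sensitive type matched, if any."""
    for content_type, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(content):
            return content_type
    return None

def privacy_process(content: str, mode: str = "placeholder") -> str:
    """Produce the displayed form of the content; the content before privacy
    processing is still what gets sent to the target receiving end."""
    if analyze_type(content) is None:
        return content                      # not a preset type: display as-is
    if mode == "placeholder":
        return "*" * len(content)           # placeholder-character display
    if mode == "identifier":
        return "[hidden]"                   # preset-identifier substitution
    return content                          # "blur" is applied at render time

typed = "13812345678"                       # hypothetical sensitive input
displayed = privacy_process(typed)          # shown on screen: "***********"
sent = typed                                # unmodified content sent onward
```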
4. The method of claim 1, further comprising:
under the condition that a target client is detected to be in an information input state, starting a front-facing imaging device of a terminal to which the target client belongs to obtain current first facial feature information of a user, wherein the target client is an input client and the facial feature information is used for indicating the expression state of the user;
generating a candidate input expression list based on the current input information of the user and the first facial feature information;
and displaying the candidate input expression list for the user to select.
5. The method of claim 4, further comprising:
under the condition of displaying the candidate input expression list, acquiring current second facial feature information of the user;
determining whether to update the candidate input expression list based on the second facial feature information.
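By way of example and not limitation, one way to make the update decision of claim 5 is to compare the two facial feature vectors; the use of cosine similarity and the 0.8 threshold are assumptions introduced here, since the application does not specify the comparison:

```python
# Minimal sketch of claim 5: refresh the candidate list when the second
# facial feature vector no longer resembles the first (expression drift).
import math

def should_update_candidates(first_feat: list[float],
                             second_feat: list[float],
                             threshold: float = 0.8) -> bool:
    dot = sum(a * b for a, b in zip(first_feat, second_feat))
    norm = (math.sqrt(sum(a * a for a in first_feat))
            * math.sqrt(sum(b * b for b in second_feat)))
    if norm == 0.0:
        return True                   # degenerate features: refresh the list
    return dot / norm < threshold     # expression drifted: update the list
```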
6. The method of claim 4, further comprising:
acquiring candidate input expression sample information comprising historical input information, historical facial feature information and selection information of a user based on a historical candidate input expression list;
training a neural network model with the candidate input expression sample information to obtain a candidate input expression analysis model;
wherein the generating of a candidate input expression list based on the current input information of the user and the first facial feature information comprises:
and generating a candidate input expression list through the candidate input expression analysis model based on the current input information of the user and the first facial feature information.
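By way of example and not limitation, the training step of claim 6 may be sketched with PyTorch; the network architecture, feature dimensions, expression vocabulary size, optimizer settings, and random batch are all assumptions, since the application does not disclose a specific model:

```python
# Illustrative sketch of training a candidate input expression analysis
# model (claim 6) on historical input, facial features, and user selections.
import torch
import torch.nn as nn

class CandidateExpressionModel(nn.Module):
    """Scores a fixed vocabulary of expressions given features of the user's
    current input text and the user's facial feature vector."""
    def __init__(self, text_dim: int = 128, face_dim: int = 64,
                 num_expressions: int = 500):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim + face_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_expressions),
        )

    def forward(self, text_feat: torch.Tensor,
                face_feat: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([text_feat, face_feat], dim=-1))

def train_step(model, optimizer, text_feat, face_feat, selected_idx):
    """One supervised step: the expression the user actually selected from a
    historical candidate list serves as the training label."""
    logits = model(text_feat, face_feat)
    loss = nn.functional.cross_entropy(logits, selected_idx)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

model = CandidateExpressionModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Hypothetical batch of 8 historical samples (the sample information of claim 6).
loss = train_step(model, optimizer,
                  torch.randn(8, 128), torch.randn(8, 64),
                  torch.randint(0, 500, (8,)))
# At inference time, the top-k scored expressions would form the candidate list.
```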
7. The method of claim 4, wherein the candidate input expression list comprises first user expression information, second user expression information, or third user expression information,
the first user expression information includes user facial image data to which the first facial feature information belongs and current input information of the user, or,
the first user expression information comprises user facial image data to which the first facial feature information belongs and text information corresponding to the first facial feature information;
the second user expression information includes user facial image data to which the first facial feature information belongs, current input information of the user, and first scene image information, which is automatically acquired in real time based on an imaging device of the terminal, or,
the second user expression information comprises user facial image data to which the first facial feature information belongs, text information corresponding to the first facial feature information and first scene image information, and the first scene image information is automatically acquired in real time based on imaging equipment of the terminal;
the third user expression information includes user facial image data to which the first facial feature information belongs, current input information of the user, and second scene image information generated based on location information of the terminal, or,
the third user expression information includes user facial image data to which the first facial feature information belongs, text information corresponding to the first facial feature information, and second scene image information generated based on location information of the terminal;
the method further comprises the following steps:
and acquiring message data of the target user of the user in the target client, and generating the third user expression information under the condition that the message data comprises the position inquiry information.
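By way of example and not limitation, the trigger for generating the third user expression information in claim 7 might be sketched as follows; the matching hints and the render_scene callable are hypothetical stand-ins for whatever message analysis and scene-image generation the terminal actually provides:

```python
# Minimal sketch of the claim 7 trigger: compose the third user expression
# information when the peer's messages contain a location inquiry.
from typing import Callable, Optional

LOCATION_QUERY_HINTS = ("where are you", "your location")

def maybe_third_expression(messages: list[str],
                           face_image: bytes,
                           input_text: str,
                           terminal_location: tuple[float, float],
                           render_scene: Callable[[tuple[float, float]], bytes]
                           ) -> Optional[dict]:
    if any(hint in msg.lower() for msg in messages
           for hint in LOCATION_QUERY_HINTS):
        scene = render_scene(terminal_location)  # second scene image information
        return {"face": face_image, "text": input_text, "scene": scene}
    return None                                  # no location inquiry detected
```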
8. An Internet of things ultra-low power consumption application communication management apparatus, characterized by comprising:
a detection unit, configured to start a front-facing imaging device of a terminal to which a target client belongs to acquire a current distance between the terminal and a face of a user under the condition that the target client is detected to be in an information input state;
a determining unit, configured to determine that a current input state is a privacy risk input state under the condition that the current distance falls outside a preset distance range around the habitual distance of the user; and
a display unit, configured to display content input to the target client after performing privacy processing on the content based on the privacy risk input state, and to send the content before the privacy processing to a target receiving end.
9. An electronic device comprising a memory and a processor, characterized in that the processor is configured to implement the steps of the Internet of things ultra-low power consumption application communication management method according to any one of claims 1 to 7 when executing a computer program stored in the memory.
10. A computer-readable storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, implements the steps of the Internet of things ultra-low power consumption application communication management method of any one of claims 1 to 7.
CN202210323413.8A 2022-03-30 2022-03-30 Internet of things ultra-low power consumption application communication management method and related equipment Pending CN114743240A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210323413.8A CN114743240A (en) 2022-03-30 2022-03-30 Internet of things ultra-low power consumption application communication management method and related equipment


Publications (1)

Publication Number Publication Date
CN114743240A 2022-07-12

Family

ID=82276214

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210323413.8A Pending CN114743240A (en) 2022-03-30 2022-03-30 Internet of things ultra-low power consumption application communication management method and related equipment

Country Status (1)

Country Link
CN (1) CN114743240A (en)

Similar Documents

Publication Publication Date Title
US10996741B2 (en) Augmented reality conversation feedback
CN110022399B (en) Message display method and device, user terminal and readable storage medium
CN110598441A (en) User privacy protection method and device
CN112925412A (en) Control method and device of intelligent mirror and storage medium
CN115454561A (en) Customized interface display method, device, equipment and storage medium
CN109657535B (en) Image identification method, target device and cloud platform
CN108108299B (en) User interface testing method and device
CN116229188B (en) Image processing display method, classification model generation method and equipment thereof
CN108737427B (en) Identity display method, device, terminal and storage medium applied to conference room
CN109119131B (en) Physical examination method and system based on medical examination expert intelligence library platform
CN114743240A (en) Internet of things ultra-low power consumption application communication management method and related equipment
CN116092648A (en) Service processing method, device, electronic equipment and computer readable medium
CN112230815B (en) Intelligent help seeking method, device, equipment and storage medium
CN110456920A (en) Semantic analysis-based content recommendation method and device
CN114445894A (en) Storage cabinet management method and device, storage cabinet, electronic equipment and storage medium
CN113486730A (en) Intelligent reminding method based on face recognition and related device
CN113138702A (en) Information processing method, information processing device, electronic equipment and storage medium
CN114386097B (en) User information management method based on cloud architecture and related equipment
CN111756705B (en) Attack testing method, device, equipment and storage medium of in-vivo detection algorithm
JP6518359B1 (en) Credit management and automatic payment system by face recognition technology
EP4083809A1 (en) Method and apparatus for sharing favorite
CN118070807A (en) Data processing method, device, equipment, storage medium and program product
CN113297428A (en) Data storage method and device of body fat detection equipment and electronic equipment
CN113868401A (en) Digital human interaction method and device, electronic equipment and computer storage medium
CN113963687A (en) Voice interaction method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination