CN113220128A - Self-adaptive intelligent interaction method and device and electronic equipment

Info

Publication number
CN113220128A
Authority
CN
China
Prior art keywords
information
characteristic information
acquiring
face image
display characteristic
Prior art date
Legal status
Granted
Application number
CN202110584716.0A
Other languages
Chinese (zh)
Other versions
CN113220128B (en)
Inventor
龙唯浚
林宇光
Current Assignee
Qihailai Shanghai Artificial Intelligence Technology Co ltd
Original Assignee
Qihailai Shanghai Artificial Intelligence Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Qihailai Shanghai Artificial Intelligence Technology Co., Ltd.
Priority to CN202110584716.0A
Publication of CN113220128A
Application granted
Publication of CN113220128B
Current legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Telephone Function (AREA)

Abstract

A self-adaptive intelligent interaction method, apparatus, and electronic device are provided. The method comprises the following steps: acquiring a face image of a user; extracting identity feature information from the face image; acquiring current display characteristic information and optimized display characteristic information of the mobile terminal; judging whether the current display characteristic information is consistent with the identity feature information; if yes, displaying preset content according to the current display characteristic information; and if not, displaying the preset content according to the optimized display characteristic information. With the self-adaptive intelligent interaction method, apparatus, and electronic device, the identity feature information of the user can be extracted from the user's face image, and suitable display characteristic information is selected according to that identity feature information for interaction, so that the solution is suitable for different user groups.

Description

Self-adaptive intelligent interaction method and device and electronic equipment
Technical Field
The invention belongs to the technical field of human-computer interaction, and in particular relates to a self-adaptive intelligent interaction method and apparatus, and an electronic device.
Background
Existing mobile terminals already provide human-computer interaction functions, such as text interaction, voice interaction, or picture interaction. Generally, however, the interaction mode on a given mobile terminal is fixed, while the terminal sometimes has to serve a plurality of users, and a fixed interaction mode cannot flexibly adapt to all of them.
Disclosure of Invention
In order to solve the above problems, the present invention provides a self-adaptive intelligent interaction method, which comprises the steps of:
acquiring a face image of a user;
extracting identity feature information in the face image;
acquiring current display characteristic information and optimized display characteristic information of the mobile terminal;
judging whether the current display characteristic information is consistent with the identity characteristic information or not;
if yes, displaying preset content according to the current display characteristic information;
and if not, displaying preset content by using the optimized display characteristic information.
Preferably, the acquiring of the face image of the user comprises the steps of:
judging whether a human body exists in a preset range around the mobile terminal;
if so, acquiring the human body position information;
if not, keeping the current state of the mobile terminal;
acquiring the current position and the viewing angle range of a lens on the mobile terminal;
calculating an offset angle between the human body position and the current position of the lens according to the human body position information and the current position;
judging whether the offset angle is within the viewing angle range;
if yes, acquiring the face image through the lens;
if not, shifting the lens towards the human body by a preset angle, and returning to the step of judging whether the offset angle is within the viewing angle range.
Preferably, the determining whether a human body exists in a preset range around the mobile terminal includes:
the mobile terminal transmits a life existence detection signal to a surrounding preset range;
acquiring a reflected signal corresponding to the life existence detection signal;
judging whether life exists according to the reflected signal;
if yes, extracting vital sign information contained in the reflected signal;
if not, returning to the step in which the mobile terminal transmits the life existence detection signal to the surrounding preset range;
judging whether the vital sign information is larger than a preset value or not;
if yes, judging that a human body exists;
if not, judging that no human body exists.
Preferably, the determining whether life exists according to the reflected signal includes:
acquiring a first reflection signal corresponding to a first time;
judging whether the intensity of the first reflection signal exceeds a preset value;
if so, acquiring a second reflection signal corresponding to a second time;
if not, judging that no life exists;
judging whether the intensity of the second reflection signal exceeds a preset value;
if yes, judging that life exists;
if not, judging that no life exists.
Preferably, the extracting vital sign information contained in the reflected signal includes one or more of:
extracting life volume information contained in the reflected signal;
extracting life breathing information contained in the reflected signal;
and extracting life heartbeat information contained in the reflected signal.
Preferably, the extracting of the identity feature information in the face image comprises the steps of:
acquiring a standard face image;
acquiring age grouping data corresponding to the standard face image;
comparing the facial image with the standard facial image;
calculating a similarity between the face image and the standard face image;
and calculating age information corresponding to the face image according to the similarity and the age grouping data.
Preferably, the calculating age information corresponding to the face image based on the similarity and the age grouping data includes the steps of:
sorting all the age grouping data in a descending order;
acquiring a first similarity at a first shooting angle and a second similarity at a second shooting angle;
calculating an average of the first similarity and the second similarity;
selecting a corresponding age grouping interval from the age grouping data according to the average value;
calculating the middle value of the interval;
and taking the intermediate value as the age information.
The invention also provides a self-adaptive intelligent interaction device, which comprises:
the image acquisition module is used for acquiring a face image of a user;
the information extraction module is used for extracting the identity characteristic information in the face image;
the information acquisition module is used for acquiring the current display characteristic information and the optimized display characteristic information of the mobile terminal;
the judging module is used for judging whether the current display characteristic information is consistent with the identity characteristic information or not;
the execution module is used for executing preset operation according to the judgment result of the judgment module;
when the judgment result of the judgment module is yes, the execution module displays preset content according to the current display characteristic information; and when the judgment result of the judgment module is negative, the execution module displays preset content by using the optimized display characteristic information.
The present invention also provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any of the adaptive intelligent interaction methods described above.
The present invention also provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform any of the adaptive intelligent interaction methods described above.
According to the self-adaptive intelligent interaction method, apparatus, and electronic device provided by the invention, the identity feature information of the user can be extracted from the user's face image, and suitable display characteristic information is selected according to that identity feature information for interaction, so that the solution is suitable for different user groups.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic flow chart diagram illustrating an adaptive intelligent interaction method according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of an adaptive intelligent interaction device according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a non-transitory computer-readable storage medium according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings in conjunction with the following detailed description. It should be understood that the description is intended to be exemplary only, and is not intended to limit the scope of the present invention. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present invention.
Referring to fig. 1, in an embodiment of the present application, the present invention provides an adaptive intelligent interaction method, where the method includes:
S1: acquiring a face image of a user;
in the embodiment of the present application, the acquiring of the face image of the user in step S1 includes the steps of:
judging whether a human body exists in a preset range around the mobile terminal;
if so, acquiring the human body position information;
if not, keeping the current state of the mobile terminal;
acquiring the current position and the viewing angle range of a lens on the mobile terminal;
calculating an offset angle between the human body position and the current position of the lens according to the human body position information and the current position;
judging whether the offset angle is within the viewing angle range;
if yes, acquiring the face image through the lens;
if not, shifting the lens towards the human body by a preset angle, and returning to the step of judging whether the offset angle is within the viewing angle range.
In the embodiment of the application, when the face image of the user is acquired, it is first judged whether a human body exists in a preset range around the mobile terminal. If a human body exists, the human body position information is acquired; if not, the mobile terminal keeps its current state. The current position and the viewing angle range of a lens on the mobile terminal are then acquired, and the offset angle between the human body position and the current lens position is calculated from the human body position information and the current position; this offset angle (the included angle between the two directions) can be obtained by elementary geometry and is not elaborated here. It is then judged whether the offset angle is within the viewing angle range, i.e. whether the offset angle is smaller than or equal to the viewing angle. If yes, the face image is acquired through the lens; if not, the lens is shifted towards the human body by a preset angle, which can be chosen as required as long as it brings the lens closer to, or directly facing, the human body, and the method returns to the step of judging whether the offset angle is within the viewing angle range.
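By way of illustration only (not part of the original disclosure), a minimal Python sketch of this lens-alignment loop could look as follows; the 2-D geometry and the hypothetical `lens` object with `.position`, `.heading_deg`, `.view_angle_deg`, `.rotate()` and `.capture()` are assumptions:

```python
import math

def offset_angle(lens_pos, lens_heading_deg, body_pos):
    """Offset (included) angle, in degrees, between the lens heading and the
    direction from the lens to the detected human body; positions are assumed
    to be 2-D (x, y) coordinates in the same plane."""
    dx, dy = body_pos[0] - lens_pos[0], body_pos[1] - lens_pos[1]
    bearing = math.degrees(math.atan2(dy, dx))
    diff = (bearing - lens_heading_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    return abs(diff), (1.0 if diff >= 0 else -1.0)

def acquire_face_image(lens, body_pos, preset_angle_deg=5.0, max_steps=72):
    """Rotate the lens towards the human body by a preset angle until the offset
    angle is within the viewing angle range, then capture the face image."""
    for _ in range(max_steps):
        angle, direction = offset_angle(lens.position, lens.heading_deg, body_pos)
        if angle <= lens.view_angle_deg:           # description: offset angle <= viewing angle
            return lens.capture()                  # acquire the face image through the lens
        lens.rotate(direction * preset_angle_deg)  # shift the lens towards the human body
    return None                                    # body never entered the viewing angle range
```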
In this embodiment of the present application, the determining whether a human body exists in a preset range around the mobile terminal includes:
the mobile terminal transmits a life existence detection signal to a surrounding preset range;
acquiring a reflected signal corresponding to the life existence detection signal;
judging whether life exists according to the reflected signal;
if yes, extracting vital sign information contained in the reflected signal;
if not, returning to the step in which the mobile terminal transmits the life existence detection signal to the surrounding preset range;
judging whether the vital sign information is larger than a preset value or not;
if yes, judging that a human body exists;
if not, judging that no human body exists.
In the embodiment of the application, when judging whether a human body exists in a preset range around the mobile terminal, the mobile terminal first transmits a life existence detection signal, such as a heartbeat/respiration detection signal, into the surrounding preset range, then obtains the reflected signal corresponding to that detection signal and judges from the reflected signal whether life exists. If life exists, the vital sign information contained in the reflected signal is extracted; if not, the method returns to the step in which the mobile terminal transmits the life existence detection signal to the surrounding preset range, i.e. the terminal keeps probing. It is then judged whether the vital sign information contained in the reflected signal is greater than a preset value: if yes, a human body is judged to be present; if it is smaller than the preset value, it is judged that no human body is present. In this way, interference from animals can be avoided; for example, the respiration/heartbeat frequency of a human body is greater than that of an animal, so the preset value can separate the two.
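A minimal sketch of this decision logic, assuming a hypothetical `sensor` front end with `.transmit_probe()` and `.read_reflection()`, and relying on the `life_present` and `extract_vital_signs` helpers sketched after the following paragraphs; the threshold value is a placeholder, not a figure from the disclosure:

```python
def detect_human(sensor, human_threshold_hz=0.15, max_attempts=10):
    """Judge whether a human body exists within the preset surrounding range."""
    for _ in range(max_attempts):
        sensor.transmit_probe()                      # emit the life existence detection signal
        if not life_present(sensor):                 # two-sample intensity check, sketched below
            continue                                 # keep probing the surrounding preset range
        reflection = sensor.read_reflection()        # reflected signal carrying vital signs
        signs = extract_vital_signs(reflection)      # respiration/heartbeat, sketched below
        return signs["respiration_hz"] > human_threshold_hz  # exceeds preset value -> human body
    return False                                     # no human detected within max_attempts
```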
In this embodiment of the present application, the determining whether there is a life according to the reflected signal includes:
acquiring a first reflection signal corresponding to a first time;
judging whether the intensity of the first reflection signal exceeds a preset value;
if so, acquiring a second reflection signal corresponding to a second time;
if not, judging that no life exists;
judging whether the intensity of the second reflection signal exceeds a preset value;
if yes, judging that life exists;
if not, judging that no life exists.
In the embodiment of the application, when judging whether life exists according to the reflected signal, a first reflection signal corresponding to a first time is obtained and it is judged whether the intensity of the first reflection signal exceeds a preset value. If it does, a second reflection signal corresponding to a second time is acquired; if not, it is judged that no life exists. It is then judged whether the intensity of the second reflection signal exceeds the preset value: if yes, it is judged that life exists; if not, it is judged that no life exists. Requiring the intensity of the reflected signal to exceed the preset value at two successive times improves the accuracy of the judgment and reduces the error that a single measurement could introduce.
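The two-sample check itself might be sketched as follows; the intensity measure and the sampling interval are assumptions, not values from the disclosure:

```python
import time

REFLECTION_THRESHOLD = 0.5   # preset intensity value (placeholder)
SAMPLE_INTERVAL_S = 1.0      # gap between the first time and the second time

def intensity(reflection):
    """Signal strength measure; here simply the mean absolute amplitude (an assumption)."""
    return sum(abs(x) for x in reflection) / len(reflection)

def life_present(sensor, threshold=REFLECTION_THRESHOLD):
    """Life is judged to exist only if the reflected-signal intensity exceeds the
    preset value at two successive times, reducing the error of a single reading."""
    first = sensor.read_reflection()          # first reflection signal (first time)
    if intensity(first) <= threshold:
        return False                          # no life
    time.sleep(SAMPLE_INTERVAL_S)
    second = sensor.read_reflection()         # second reflection signal (second time)
    return intensity(second) > threshold      # life exists only if this also exceeds it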
In an embodiment of the present application, the extracting vital sign information included in the reflected signal includes one or more of the following:
extracting life volume information contained in the reflected signal;
extracting life breathing information contained in the reflected signal;
and extracting life heartbeat information contained in the reflected signal.
In the embodiment of the present application, when extracting the vital sign information included in the reflected signal, any one or more of the following information may be extracted as needed, for example: vital volume information, vital respiration information, vital heartbeat information. Specifically, the vital volume information may be volume information of a living body, which may be represented by a living body image area; the life breathing information can be breathing information of a living body and can be represented by the breathing frequency of the living body; the vital heartbeat information may be heartbeat information of a living body, and may be represented by a heartbeat frequency of the living body.
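The disclosure does not specify how these figures are recovered from the reflected signal; one common assumption is to take the dominant spectral component in a respiration band and a heartbeat band, as sketched below (the life volume information, which the description ties to the imaged body area, is omitted here, and the band limits are assumptions):

```python
import numpy as np

def dominant_frequency(signal, sample_rate_hz, band_hz):
    """Strongest spectral component of `signal` inside the (low, high) band, in Hz."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate_hz)
    mask = (freqs >= band_hz[0]) & (freqs <= band_hz[1])
    if not mask.any():
        return 0.0
    return float(freqs[mask][np.argmax(spectrum[mask])])

def extract_vital_signs(reflection, sample_rate_hz=20.0):
    """Vital sign information carried by the reflected signal: the life breathing
    information and life heartbeat information are represented here by their
    dominant frequencies; typical adult bands are assumed."""
    signal = np.asarray(reflection, dtype=float)
    return {
        "respiration_hz": dominant_frequency(signal, sample_rate_hz, (0.1, 0.5)),
        "heartbeat_hz": dominant_frequency(signal, sample_rate_hz, (0.8, 3.0)),
    }
```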
S2: extracting identity feature information in the face image;
in the embodiment of the present application, when extracting the identity feature information in the face image, various information in the face image, such as iris information, skin information, hair information, and the like, may be extracted as needed.
In the embodiment of the present application, the extracting of the identity feature information in the face image in step S2 includes the steps of:
acquiring a standard face image;
acquiring age grouping data corresponding to the standard face image;
comparing the facial image with the standard facial image;
calculating a similarity between the face image and the standard face image;
and calculating age information corresponding to the face image according to the similarity and the age grouping data.
In the embodiment of the application, when the identity feature information in the face image is extracted, a standard face image is obtained first. Each age group corresponds to one standard face image, and these standard face images can be obtained by collecting and summarizing large amounts of data. The age grouping data corresponding to the standard face images is then acquired, i.e. the age group associated with each standard face image: for example, standard face image A corresponds to ages 10-15 and standard face image B to ages 15-20, and so on. The face image is then compared with the standard face images, the similarity between them is calculated, and the age information corresponding to the face image is finally calculated from the similarity and the age grouping data.
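As an illustration, assuming faces are represented by embedding vectors and compared by cosine similarity (neither is specified in the disclosure; the reference data below are placeholders), the comparison against the standard face images could be sketched as:

```python
import numpy as np

# Hypothetical reference data: one standard face embedding per age group, each
# paired with its age grouping interval (lower, upper); values are placeholders.
STANDARD_FACES = [
    {"embedding": np.array([0.2, 0.7, 0.1]), "age_group": (10, 15)},  # "standard face image A"
    {"embedding": np.array([0.6, 0.3, 0.4]), "age_group": (15, 20)},  # "standard face image B"
]

def cosine_similarity(a, b):
    """Similarity between two face representations (cosine similarity is an assumption)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def similarities_to_standards(face_embedding):
    """Compare the user's face image (as an embedding) with every standard face
    image and return (similarity, age_group) pairs."""
    return [(cosine_similarity(face_embedding, s["embedding"]), s["age_group"])
            for s in STANDARD_FACES]
```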
In the embodiment of the present application, said calculating age information corresponding to said face image based on said similarity and said age grouping data comprises the steps of:
sorting all the age grouping data in a descending order;
acquiring a first similarity at a first shooting angle and a second similarity at a second shooting angle;
calculating an average of the first similarity and the second similarity;
selecting a corresponding age grouping interval from the age grouping data according to the average value;
calculating the middle value of the interval;
and taking the intermediate value as the age information.
In the embodiment of the present application, when calculating the age information corresponding to the face image from the similarity and the age grouping data, all the age grouping data are first sorted in descending order. A first similarity at a first shooting angle and a second similarity at a second shooting angle are then obtained, i.e. the face of the user is photographed from at least two shooting angles and a similarity is computed for each. The average of the first similarity and the second similarity is calculated, the corresponding age grouping interval is selected from the age grouping data according to this average, the middle value of that interval is calculated, and the middle value is taken as the age information.
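Continuing the previous sketch, one possible reading of this selection rule is shown below; the disclosure does not spell out how the averaged similarity maps onto an age grouping interval, so picking the interval with the highest averaged similarity is an assumption:

```python
def estimate_age(embedding_angle1, embedding_angle2):
    """Age information from two shooting angles: average the similarities obtained
    at the first and second shooting angles for each age group, select the best
    matching age grouping interval, and return its middle value as the age.
    Reuses STANDARD_FACES and cosine_similarity from the previous sketch."""
    groups = sorted(STANDARD_FACES, key=lambda s: s["age_group"], reverse=True)  # descending order
    best_avg, best_group = None, None
    for s in groups:
        first = cosine_similarity(embedding_angle1, s["embedding"])   # first shooting angle
        second = cosine_similarity(embedding_angle2, s["embedding"])  # second shooting angle
        avg = (first + second) / 2.0                                  # average of the two similarities
        if best_avg is None or avg > best_avg:
            best_avg, best_group = avg, s["age_group"]
    lower, upper = best_group
    return (lower + upper) / 2.0                                      # middle value of the interval
```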
S3: acquiring current display characteristic information and optimized display characteristic information of the mobile terminal;
In the embodiment of the present application, the current display characteristic information may be the current display mode of the mobile terminal, such as font size, screen brightness, and the like, and the optimized display characteristic information is a better display mode, such as a larger font or higher screen brightness. The optimized display characteristic information can be chosen as required and is stored in the mobile terminal in advance.
S4: judging whether the current display characteristic information is consistent with the identity characteristic information or not;
In the embodiment of the application, it is compared whether the current display characteristic information and the identity feature information belong to a corresponding relationship; this correspondence can be stored in advance in the form of a mapping.
S5: if yes, displaying preset content according to the current display characteristic information;
S6: and if not, displaying preset content by using the optimized display characteristic information.
In the embodiment of the application, when the current display characteristic information is judged to be consistent with the identity feature information, the preset content is displayed with the current display characteristic information; when the two are judged to be inconsistent, the preset content is displayed with the optimized display characteristic information.
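Steps S3-S6 could be sketched as follows; the age-to-display mapping, field names, and example values are invented for illustration and are not taken from the disclosure:

```python
# Hypothetical mapping, stored in advance, between identity feature information
# (here an age grouping interval) and the display characteristic information
# considered appropriate for it; values are illustrative only.
AGE_TO_DISPLAY = {
    (10, 15): {"font_pt": 14, "brightness_pct": 60},
    (15, 20): {"font_pt": 12, "brightness_pct": 55},
    (60, 80): {"font_pt": 20, "brightness_pct": 90},  # larger font, higher brightness
}

def choose_display(current, age_group, optimized):
    """Keep the current display characteristic information if it corresponds to the
    user's identity feature information; otherwise use the optimized information
    stored in advance on the mobile terminal."""
    expected = AGE_TO_DISPLAY.get(age_group)
    if expected is not None and current == expected:
        return current       # S5: display the preset content with the current settings
    return optimized         # S6: display the preset content with the optimized settings
```

For instance, calling `choose_display` with current settings that do not match the mapped entry for the detected age group simply returns the optimized settings.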
As shown in fig. 2, in the embodiment of the present application, the present invention further provides an adaptive intelligent interaction apparatus, where the apparatus includes:
an image acquisition module 10 for acquiring a face image of a user;
an information extraction module 20, configured to extract identity feature information in the face image;
an information obtaining module 30, configured to obtain current display characteristic information and optimized display characteristic information of the mobile terminal;
a judging module 40, configured to judge whether the current display characteristic information matches the identity characteristic information;
an executing module 50, configured to execute a preset operation according to the determination result of the determining module 40;
when the judgment result of the judgment module 40 is yes, the execution module 50 displays preset content according to the current display characteristic information; when the judgment result of the judgment module 40 is negative, the execution module 50 displays preset content according to the optimized display characteristic information.
In the embodiment of the present application, the adaptive intelligent interaction apparatus provided by the present application may execute the above-mentioned adaptive intelligent interaction method.
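A structural sketch of how these modules could be composed is given below; the attribute and method names are illustrative only and are not taken from the disclosure:

```python
class AdaptiveInteractionDevice:
    """Sketch mirroring the modules of FIG. 2 (names are assumptions)."""

    def __init__(self, image_acquisition, information_extraction,
                 information_acquisition, judging, executing):
        self.image_acquisition = image_acquisition              # module 10
        self.information_extraction = information_extraction    # module 20
        self.information_acquisition = information_acquisition  # module 30
        self.judging = judging                                   # module 40
        self.executing = executing                                # module 50

    def interact(self):
        face = self.image_acquisition.acquire()                  # face image of the user
        identity = self.information_extraction.extract(face)     # identity feature information
        current, optimized = self.information_acquisition.get()  # display characteristic information
        if self.judging.is_consistent(current, identity):
            return self.executing.display(current)               # current settings
        return self.executing.display(optimized)                 # optimized settings
```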
Referring to fig. 3, an embodiment of the present disclosure also provides an electronic device 100, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the adaptive intelligent interaction method of the method embodiments described above.
The disclosed embodiments also provide a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the foregoing method embodiments.
The disclosed embodiments also provide a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the adaptive intelligent interaction method of the aforementioned method embodiments.
According to the self-adaptive intelligent interaction method, apparatus, and electronic device provided by the invention, the identity feature information of the user can be extracted from the user's face image, and suitable display characteristic information is selected according to that identity feature information for interaction, so that the solution is suitable for different user groups.
Referring now to FIG. 3, a block diagram of an electronic device 100 suitable for use in implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 3 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 3, the electronic device 100 may include a processing means (e.g., a central processing unit, a graphic processor, etc.) 101 that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)102 or a program loaded from a storage means 108 into a Random Access Memory (RAM) 103. In the RAM 103, various programs and data necessary for the operation of the electronic apparatus 100 are also stored. The processing device 101, the ROM 102, and the RAM 103 are connected to each other via a bus 104. An input/output (I/O) interface 105 is also connected to bus 104.
Generally, the following devices may be connected to the I/O interface 105: input devices 106 including, for example, a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, etc.; an output device 107 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 108 including, for example, magnetic tape, hard disk, etc.; and a communication device 109. The communication means 109 may allow the electronic device 100 to communicate wirelessly or by wire with other devices to exchange data. While the figures illustrate an electronic device 100 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication means 109, or installed from the storage means 108, or installed from the ROM 102. The computer program, when executed by the processing device 101, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
Reference is now made to fig. 4, which shows a schematic structural diagram of a computer-readable storage medium suitable for implementing an embodiment of the present disclosure, the computer-readable storage medium storing a computer program, which when executed by a processor is capable of implementing the adaptive intelligent interaction method as described in any one of the above.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring at least two internet protocol addresses; sending a node evaluation request comprising the at least two internet protocol addresses to node evaluation equipment, wherein the node evaluation equipment selects the internet protocol addresses from the at least two internet protocol addresses and returns the internet protocol addresses; receiving an internet protocol address returned by the node evaluation equipment; wherein the obtained internet protocol address indicates an edge node in the content distribution network.
Alternatively, the computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receiving a node evaluation request comprising at least two internet protocol addresses; selecting an internet protocol address from the at least two internet protocol addresses; returning the selected internet protocol address; wherein the received internet protocol address indicates an edge node in the content distribution network.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element. The above description is merely exemplary of the present application and is presented to enable those skilled in the art to understand and practice the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
In short, the above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An adaptive intelligent interaction method, characterized in that the method comprises the steps of:
acquiring a face image of a user;
extracting identity feature information in the face image;
acquiring current display characteristic information and optimized display characteristic information of the mobile terminal;
judging whether the current display characteristic information is consistent with the identity characteristic information or not;
if yes, displaying preset content according to the current display characteristic information;
and if not, displaying preset content by using the optimized display characteristic information.
2. The adaptive intelligent interaction method of claim 1, wherein the obtaining of the facial image of the user comprises the steps of:
judging whether a human body exists in a preset range around the mobile terminal;
if so, acquiring the human body position information;
if not, keeping the current state of the mobile terminal;
acquiring the current position and the viewing angle range of a lens on the mobile terminal;
calculating an offset angle between the human body position and the current position of the lens according to the human body position information and the current position;
judging whether the offset angle is within the viewing angle range;
if yes, acquiring the face image through the lens;
if not, shifting the lens towards the human body by a preset angle, and returning to the step of judging whether the offset angle is within the viewing angle range.
3. The adaptive intelligent interaction method according to claim 2, wherein the step of judging whether a human body exists in a preset range around the mobile terminal comprises the steps of:
the mobile terminal transmits a life existence detection signal to a surrounding preset range;
acquiring a reflected signal corresponding to the life existence detection signal;
judging whether life exists according to the reflected signal;
if yes, extracting vital sign information contained in the reflected signal;
if not, returning to the step in which the mobile terminal transmits the life existence detection signal to the surrounding preset range;
judging whether the vital sign information is larger than a preset value or not;
if yes, judging that a human body exists;
if not, judging that no human body exists.
4. The adaptive intelligent interactive method according to claim 3, wherein the step of determining whether a life exists according to the reflected signal comprises the steps of:
acquiring a first reflection signal corresponding to a first time;
judging whether the intensity of the first reflection signal exceeds a preset value;
if so, acquiring a second reflection signal corresponding to a second time;
if not, judging that no life exists;
judging whether the intensity of the second reflection signal exceeds a preset value;
if yes, judging that life exists;
if not, judging that no life exists.
5. The adaptive intelligent interaction method of claim 3, wherein the extracting vital sign information contained in the reflected signal comprises one or more of:
extracting life volume information contained in the reflected signal;
extracting life breathing information contained in the reflected signal;
and extracting life heartbeat information contained in the reflected signal.
6. The adaptive intelligent interaction method of claim 1, wherein the extracting of the identity feature information in the facial image comprises the steps of:
acquiring a standard face image;
acquiring age grouping data corresponding to the standard face image;
comparing the facial image with the standard facial image;
calculating a similarity between the face image and the standard face image;
and calculating age information corresponding to the face image according to the similarity and the age grouping data.
7. The adaptive intelligent interactive method according to claim 6, wherein the calculating age information corresponding to the facial image according to the similarity and the age grouping data comprises the steps of:
sorting all the age grouping data in a descending order;
acquiring a first similarity at a first shooting angle and a second similarity at a second shooting angle;
calculating an average of the first similarity and the second similarity;
selecting a corresponding age grouping interval from the age grouping data according to the average value;
calculating the middle value of the interval;
and taking the intermediate value as the age information.
8. An adaptive intelligent interaction device, the device comprising:
the image acquisition module is used for acquiring a face image of a user;
the information extraction module is used for extracting the identity characteristic information in the face image;
the information acquisition module is used for acquiring the current display characteristic information and the optimized display characteristic information of the mobile terminal;
the judging module is used for judging whether the current display characteristic information is consistent with the identity characteristic information or not;
the execution module is used for executing preset operation according to the judgment result of the judgment module;
when the judgment result of the judgment module is yes, the execution module displays preset content according to the current display characteristic information; and when the judgment result of the judgment module is negative, the execution module displays preset content by using the optimized display characteristic information.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the adaptive intelligent interaction method of any of the preceding claims 1-7.
10. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the adaptive intelligent interaction method of any one of claims 1-7.
CN202110584716.0A 2021-05-27 2021-05-27 Self-adaptive intelligent interaction method and device and electronic equipment Active CN113220128B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110584716.0A CN113220128B (en) 2021-05-27 2021-05-27 Self-adaptive intelligent interaction method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN113220128A true CN113220128A (en) 2021-08-06
CN113220128B CN113220128B (en) 2022-11-04

Family

ID=77098775

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110584716.0A Active CN113220128B (en) 2021-05-27 2021-05-27 Self-adaptive intelligent interaction method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113220128B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103491302A (en) * 2013-09-18 2014-01-01 潍坊歌尔电子有限公司 System and method for adjusting and controlling camera of smart television
US20170188093A1 (en) * 2015-12-28 2017-06-29 Le Holdings (Beijing) Co., Ltd. Method and electronic device for grading-based program playing based on face recognition
CN105718887A (en) * 2016-01-21 2016-06-29 惠州Tcl移动通信有限公司 Shooting method and shooting system capable of realizing dynamic capturing of human faces based on mobile terminal
CN106469298A (en) * 2016-08-31 2017-03-01 乐视控股(北京)有限公司 Age recognition methodss based on facial image and device
CN106548045A (en) * 2016-09-26 2017-03-29 惠州Tcl移动通信有限公司 It is a kind of based on the application program method for down loading at age, system and electronic equipment
CN106625711A (en) * 2016-12-30 2017-05-10 华南智能机器人创新研究院 Method for positioning intelligent interaction of robot
CN108734146A (en) * 2018-05-28 2018-11-02 北京达佳互联信息技术有限公司 Facial image Age estimation method, apparatus, computer equipment and storage medium
WO2020192222A1 (en) * 2019-03-26 2020-10-01 深圳创维-Rgb电子有限公司 Method and device for intelligent analysis of user context and storage medium
CN112364808A (en) * 2020-11-24 2021-02-12 哈尔滨工业大学 Living body identity authentication method based on FMCW radar and face tracking identification
CN112822550A (en) * 2021-01-12 2021-05-18 深圳创维-Rgb电子有限公司 Television terminal adjusting method and device and television terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
罗佳佳 (LUO Jiajia) et al.: "An age estimation method based on face images" (一种基于人脸图像的年龄估计方法), 《计算机与数字工程》 (Computer & Digital Engineering) *

Also Published As

Publication number Publication date
CN113220128B (en) 2022-11-04

Similar Documents

Publication Publication Date Title
CN111767371B (en) Intelligent question-answering method, device, equipment and medium
EP3206110B1 (en) Method of providing handwriting style correction function and electronic device adapted thereto
CN110287810B (en) Vehicle door motion detection method, device and computer readable storage medium
CN110674349B (en) Video POI (Point of interest) identification method and device and electronic equipment
CN111738316B (en) Zero sample learning image classification method and device and electronic equipment
CN111582090A (en) Face recognition method and device and electronic equipment
CN111813641B (en) Method, device, medium and equipment for collecting crash information
CN110288553A (en) Image beautification method, device and electronic equipment
CN113420159A (en) Target customer intelligent identification method and device and electronic equipment
CN110287350A (en) Image search method, device and electronic equipment
CN111626990B (en) Target detection frame processing method and device and electronic equipment
US10740423B2 (en) Visual data associated with a query
CN110908860B (en) Java thread acquisition method and device, medium and electronic equipment
CN110264430B (en) Video beautifying method and device and electronic equipment
CN110147283B (en) Display content switching display method, device, equipment and medium
CN113220128B (en) Self-adaptive intelligent interaction method and device and electronic equipment
CN107743151B (en) Content pushing method and device, mobile terminal and server
CN111462548A (en) Paragraph point reading method, device, equipment and readable medium
CN112036519B (en) Multi-bit sigmoid-based classification processing method and device and electronic equipment
CN112315463B (en) Infant hearing test method and device and electronic equipment
CN111738311A (en) Multitask-oriented feature extraction method and device and electronic equipment
CN113807145A (en) Face recognition method and device
CN110929241A (en) Rapid start method, device, medium and electronic equipment of small program
CN110969189B (en) Face detection method and device and electronic equipment
CN110390291B (en) Data processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant