CN110955348A - Intelligent system for assisting user and medium applied to intelligent system - Google Patents


Info

Publication number
CN110955348A
CN110955348A
Authority
CN
China
Prior art keywords
touch
user
visual analysis
area
wearable
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811124734.5A
Other languages
Chinese (zh)
Inventor
蔡海蛟
冯歆鹏
周骥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kunshan Zhaoguan Electronic Technology Co ltd
NextVPU Shanghai Co Ltd
Original Assignee
Kunshan Zhaoguan Electronic Technology Co ltd
NextVPU Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunshan Zhaoguan Electronic Technology Co ltd, NextVPU Shanghai Co Ltd filed Critical Kunshan Zhaoguan Electronic Technology Co ltd
Priority to CN201811124734.5A
Publication of CN110955348A
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F 3/0412 - Digitisers structurally integrated in a display
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00
    • G02B 27/01 - Head-up displays
    • G02B 27/017 - Head mounted
    • G02B 27/0172 - Head mounted characterised by optical features
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00
    • G02B 27/01 - Head-up displays
    • G02B 27/0101 - Head-up displays characterised by optical features
    • G02B 2027/0138 - Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00
    • G02B 27/01 - Head-up displays
    • G02B 27/0101 - Head-up displays characterised by optical features
    • G02B 2027/014 - Head-up displays characterised by optical features comprising information/image processing systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an intelligent system for assisting a user and a medium applied to the intelligent system. The intelligent system comprises at least a touch control device and a wearable smart device communicatively connected to the touch control device. The touch control device is used to receive a touch instruction of a user and to interact with the wearable smart device according to the touch instruction; the wearable smart device is used to capture an image in a viewing area, determine through this interaction the position in the image specified by the touch instruction, and obtain and broadcast visual analysis information for the specified position. With this scheme, the user can conveniently designate, by touching the touch control device, the position of interest in the image of the wearable smart device's viewing area, thereby instructing the device to broadcast visual analysis information for that position to the user in a targeted manner.

Description

Intelligent system for assisting user and medium applied to intelligent system
Technical Field
The present application relates to the field of communications technologies, and in particular, to an intelligent system for assisting a user and a medium applied to the intelligent system.
Background
Related statistics show that there are now about 13 million visually impaired people in China; because domestic accessibility infrastructure remains imperfect, they face many safety hazards when travelling.
Smart glasses are a new kind of head-worn wearable smart device. Like a smartphone, they have an independent operating system; users can install software and, under their own control, realize functions such as adding schedules, map navigation, interacting with friends, taking photos and videos, and making video calls with friends. Wireless network access can be realized through a mobile communication network.
For the visually impaired, an important function of smart glasses is assisted vision. A visually impaired user wears the glasses, which capture images of the field of view in real time and analyze each whole image to obtain its visual analysis information (for example, whether an object ahead obstructs the user's passage), and then broadcast that information to the user so that the user can proceed smoothly.
However, broadcasting the visual analysis information of the whole image takes a certain amount of time, during which the user can only wait passively, so convenience is poor.
Disclosure of Invention
The embodiments of the present application provide an intelligent system for assisting a user and a medium applied to the intelligent system, so as to solve the following technical problem in the prior art: current smart glasses need a certain amount of time to broadcast the visual analysis information of a whole image, during which the user can often only wait passively, so convenience is relatively poor.
The embodiment of the application adopts the following technical scheme:
an intelligent system for assisting a user comprises a touch control device and a wearable smart device communicatively connected to the touch control device;
the touch control device is used for receiving a touch instruction of a user and interacting with the wearable smart device according to the touch instruction;
the wearable smart device is used for capturing images in a viewing area, determining, through the interaction, the position specified in the images by the touch instruction, and obtaining and broadcasting visual analysis information for the specified position.
Optionally, the wearable smart device comprises smart glasses, the viewing area comprising at least a partial field of view of the smart glasses.
Optionally, the touch control device includes a touch pad, and the touch instruction is issued by the user touching a touch area of the touch pad;
the viewing area and the touch area have a mapping relation, and the specified position includes: the portion of the viewing area mapped from the portion of the touch area that the user touch-selects when issuing the touch instruction.
Optionally, the partial touch area and the partial viewing area are each a single point or several discrete points.
Optionally, the viewing area is divided into a certain number of first sub-areas, the touch area is correspondingly divided into the same number of second sub-areas, and the first sub-areas and the second sub-areas are in one-to-one mapping;
the touch-selected partial touch area includes: the second sub-area in which a touched point on the touch pad lies, or a second sub-area delineated by the touch on the touch pad.
Optionally, the touch control device interacting with the wearable smart device according to the touch instruction specifically includes:
the touch control device determines touch position information according to the touch instruction and sends the touch position information to the wearable smart device;
and the wearable smart device determines, according to the mapping relation and the received touch position information, the position specified in the image by the touch instruction.
Optionally, the wearable smart device obtaining visual analysis information for the specified position specifically includes:
the wearable smart device performs visual analysis on the specified position using an image processing algorithm to obtain visual analysis information for the specified position; or,
the wearable smart device performs visual analysis on all areas in the image using an image processing algorithm to obtain visual analysis information for all areas, and screens out the visual analysis information for the specified position.
Optionally, the visual analysis information for the specified position includes: related information, obtained through visual analysis, of an object present at the specified position, where the related information includes obstacle information and/or recognized-object information, the obstacle information includes distance and/or direction, and the recognized-object information includes at least one of text, object type, and attributes.
Optionally, the wearable smart device broadcasting visual analysis information for the specified position specifically includes:
the wearable smart device broadcasts, according to the touch duration and/or touch strength corresponding to the touch instruction, visual analysis information for the specified position that matches that touch duration and/or touch strength.
Optionally, the wearable smart device broadcasts relatively detailed visual analysis information when the touch duration and/or touch strength exceeds a certain threshold, and relatively brief visual analysis information otherwise.
Optionally, the wearable smart device comprises a bone conduction headset for broadcasting visual analysis information for the specified location to a user.
Optionally, the communication connection is a wired connection capable of supplying power, or a short-range wireless communication connection;
if the communication connection is a wired connection capable of supplying power, the touch control device includes a battery for supplying power to the wearable smart device through that connection.
A non-volatile computer storage medium applied to an intelligent system for assisting a user, the intelligent system comprising a touch control device and a wearable smart device communicatively connected to the touch control device, the medium storing computer-executable instructions configured to:
cause the touch control device to receive a touch instruction of a user and interact with the wearable smart device according to the touch instruction;
and cause the wearable smart device to capture images in a viewing area, determine, through the interaction, the position specified in the images by the touch instruction, and obtain and broadcast visual analysis information for the specified position.
At least one of the technical solutions adopted in the embodiments of the present application can achieve the following beneficial effects: by touching the touch control device, the user can conveniently and actively designate a position of interest in the image of the wearable smart device's viewing area, thereby instructing the device to broadcast visual analysis information for that position in a targeted manner. This saves the user from long waits, offers better convenience, and helps the user perceive the position more intuitively.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic diagram of an intelligent system for assisting a user according to some embodiments of the present application;
fig. 2 is a schematic diagram illustrating a user performing a touch operation through a touch control device in an actual application scenario according to some embodiments of the present application;
FIG. 3 is an exemplary detailed structural schematic diagram of the intelligent system of FIG. 1 provided in some embodiments of the present application;
FIG. 4 is an exemplary workflow diagram of the intelligence system of FIG. 3 provided by some embodiments of the present application;
fig. 5 is another exemplary workflow diagram of the intelligent system of fig. 3 provided by some embodiments of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The technical solution of the present application provides, in particular, a touch control device with which a user can conveniently and actively control the current interaction focus within the field of view of the smart glasses. This helps the user efficiently obtain the corresponding visual analysis information instead of passively receiving all the visual analysis information of the current image. The solution of the present application is explained in detail below.
Fig. 1 is a schematic structural diagram of an intelligent system for assisting a user according to some embodiments of the present application. In fig. 1, the intelligent system includes a touch control device 11 and a wearable smart device 12 (reference numerals are omitted in some embodiments for simplicity), with a communication connection between them. The connection may be wired, for example based on a physical interface such as Universal Serial Bus (USB), High Definition Multimedia Interface (HDMI), or DisplayPort (DP), or wireless, for example a short-range wireless connection such as Bluetooth or Wireless Fidelity (WiFi). The touch control device 11 may be configured to receive a touch instruction of a user and interact with the wearable smart device 12 according to the touch instruction; the wearable smart device 12 may be configured to capture an image in a viewing area, determine, through the interaction, the position specified in the image by the touch instruction, and obtain and broadcast visual analysis information for the specified position.
The touch control device may be an accessory device mainly used to assist another device, such as a standalone touch pad, a touch trackball, a touch joystick, or a touch-pad mouse, or it may itself be a primary device, such as a mobile phone or tablet computer with a touch screen.
The wearable smart device may be smart glasses, or another wearable smart device capable of capturing images, such as a smart camera worn on top of the head or hung around the neck.
The wearable smart device can not only capture images but also perform visual analysis on them locally, or hand them to other equipment such as a remote server for visual analysis, obtain the corresponding visual analysis information, and then broadcast it to the user. The visual analysis information may include information about objects actually present in the image (for visually impaired users, obstacles in particular), such as an item's direction, distance (how far from the user it currently is), and dynamics (whether it is stationary or moving, moving towards the user, actively avoiding the user, and so on). It may also include recognized content information (which may be called recognized-object information), such as text, object type, attributes, and graphical features, or virtual content generated by techniques such as Augmented Reality (AR) and Virtual Reality (VR). In addition, the wearable smart device may have further functions, such as real-time navigation, physiological data monitoring, distress calls, games, and multimedia playback.
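By way of illustration only, such visual analysis information could be organized as a small data structure. The following Python sketch is not part of the patent; every field name is an assumption chosen for readability:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ObstacleInfo:
    """Related information about a physical object at a specified position (illustrative)."""
    distance_m: Optional[float] = None      # how far the object currently is from the user
    direction: Optional[str] = None         # e.g. "lower right"
    is_moving: bool = False                 # stationary or moving
    moving_toward_user: bool = False        # moving towards / actively avoiding the user

@dataclass
class RecognizedObjectInfo:
    """Recognized content information (illustrative)."""
    text: Optional[str] = None              # recognized characters, if any
    object_type: Optional[str] = None       # e.g. "chair", "door"
    attributes: List[str] = field(default_factory=list)

@dataclass
class VisualAnalysisInfo:
    """What the wearable device broadcasts for one specified position."""
    obstacle: Optional[ObstacleInfo] = None
    recognized: Optional[RecognizedObjectInfo] = None
```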
In some embodiments of the present application, the user may keep the touch control device somewhere within easy reach, for example holding it directly or keeping it in a clothing pocket, so that touch operations can be performed on it at any time. The touch control device has at least one touch area; the user touches within the touch area with a hand or a touch tool such as a stylus to perform a touch operation, and a touch instruction is issued through the corresponding operation. Touch actions include clicking, sustained pressing, sliding along a continuous track, continuous contact, and driving a touch trackball or touch joystick to change direction.
The touch instruction can specify a position in the image of the viewing area. The specified position corresponds to the touch action performed by the user issuing the instruction; in other words, what position is actually specified is determined by the user's touch. The position may be a single point, several discrete points, or at least one sub-area, depending on a predetermined rule.
With the intelligent system in fig. 1, the user can actively and conveniently designate a position of interest in the image of the wearable smart device's viewing area by touching the touch control device, instructing the device to broadcast visual analysis information for that position in a targeted manner. This spares the user long waits and offers good convenience. Moreover, because the position is designated by the user, the user perceives it intuitively and knows clearly which position the currently broadcast visual analysis information refers to, which effectively shortens the user's reaction time and improves the user experience.
Based on the intelligent system of fig. 1, some embodiments of the present application also provide specific implementations and extensions of the intelligent system, described below.
In some embodiments of the present application, smart glasses, as a typical wearable smart device, well satisfy practical needs such as wearing convenience and viewfinding; some of the embodiments below are described with this case as an example.
The viewing area may include at least part of the field of view of the smart glasses and is, understandably, similar to the field of view of a user with normal vision. The functional module for viewfinding on the smart glasses may comprise one or more cameras; when the glasses comprise two or more cameras, the user's field of view can be simulated more realistically and three-dimensional images can be captured and processed, so that richer visual analysis information, such as spatial distance and three-dimensional coordinates, can be obtained through visual analysis. The cameras may be ordinary cameras or, to realize special functions, special cameras such as a fisheye camera (for a wider field of view) or an infrared camera (for night vision). The cameras are disposed, for example, on the lenses or the frame of the smart glasses.
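For readers unfamiliar with binocular ranging, the spatial distance mentioned above is conventionally recovered from the disparity between the two cameras' images. The patent does not prescribe a formula; the following sketch shows only the standard stereo relation as an assumption about how such a system could work:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic pinhole stereo relation Z = f * B / d.

    focal_px:     camera focal length in pixels
    baseline_m:   distance between the two cameras in metres
    disparity_px: horizontal pixel offset of the same point between both images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point seen by both cameras")
    return focal_px * baseline_m / disparity_px

# Example: f = 700 px, 6 cm baseline, 35 px disparity -> 700 * 0.06 / 35 = 1.2 m away
```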
In order to realize intellectualization, the intelligent glasses may further include some conventional or non-conventional software and hardware modules, such as an embedded operating system, an intelligent application, a central processing unit, an image processor, a memory, an antenna, a bluetooth module, a WiFi module, a battery, a peripheral interface, and physical keys.
In some embodiments of the present application, a touch pad, as a typical touch control device, well satisfies practical needs such as the intuitive positioning required by a visually impaired user; the following embodiments are described with this case as an example.
The touch pad has at least one touch area, and the user issues a corresponding touch instruction by touching within it. The viewing area (or rather the area in an image captured through the viewing area; in highly real-time scenes the two can be regarded as identical) has a mapping relation with the touch area of the touch pad, and the specified position may include: the portion of the viewing area mapped from the portion of the touch area that the user touch-selects when issuing a touch instruction.
This mapping helps ensure that the user knows clearly which point or sub-area of the touch area to touch in order to hear the visual analysis information corresponding to a given position in the image. It also helps the user operate more smoothly and accurately, achieving a "touch and get" experience.
The mapping relation is preferably set according to the habits of typical users. For example, the viewing area and the touch area may be kept consistent and mapped in equal proportion: the upper-left sub-area of the viewing area maps to the upper-left sub-area of the touch area, the lower-right to the lower-right, the middle to the middle, and so on. If the viewing area and the touch area differ in shape, adaptation can be performed during mapping, for example by logically deforming one area: supposing the viewing area is rectangular and the touch area is square, the viewing area can be logically stretched into a square and then mapped onto the touch area.
The settings in the previous paragraph are only rough illustrations. More specific schemes are also varied, for example point-to-point mapping, sub-area-to-sub-area mapping, or point-to-sub-area mapping, and the mapping may be one-to-one or one-to-many.
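A minimal sketch of such an equal-proportion mapping with shape adaptation, written in Python with assumed coordinate conventions (the patent prescribes no concrete formula):

```python
def map_touch_to_view(touch_xy, touch_size, view_size):
    """Map a touch point to a point in the viewing-area image.

    Normalizing to [0, 1] first logically 'stretches' the touch area onto
    the viewing area, so a square pad and a rectangular image still
    correspond point for point.
    """
    tx, ty = touch_xy
    tw, th = touch_size    # touch area width/height in sensor units
    vw, vh = view_size     # image width/height in pixels
    return (tx / tw * vw, ty / th * vh)

# The centre of a 1000 x 1000 pad maps to the centre of a 1920 x 1080 image:
# map_touch_to_view((500, 500), (1000, 1000), (1920, 1080)) -> (960.0, 540.0)
```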
For example, with point-to-point mapping, the partial touch area and the partial viewing area may each be a single point or several discrete points, which helps position the interaction focus precisely. Note that in practice, whether the partial touch area is a point or a sub-area, the partial viewing area may be one or more sub-areas rather than a point; this helps the user hear more visual analysis information more efficiently, since otherwise more touch operations would be needed to hear the same amount of information.
Within the touch area, whether a point or a sub-area, the user touch-selects according to a predetermined rule; the present application does not limit the specific rule and only gives examples. For point selection, whichever point the user touches is the point selected. For sub-area selection, touching any point in a sub-area may indicate selecting it; alternatively, a sub-area is selected when the user's continuous sliding track across it exceeds a certain threshold, or when the area the user encloses within it with a closed continuous track exceeds a certain threshold (for convenience, such an operation may be called delineation); and so on.
In some embodiments of the present application, the viewing area may be divided into a certain number of first sub-areas and the touch area correspondingly divided into the same number of second sub-areas, with a one-to-one mapping between them. In this case, the touch-selected partial touch area may include: the second sub-area in which a touched point on the touch pad lies, or a second sub-area delineated by the touch on the touch pad.
For example, in a practical scenario the viewing area and the touch area are each divided into 9 sub-areas in a nine-grid layout. Fig. 2 shows a user performing a touch operation on the touch control device (assumed to be a touch pad) in this scenario: the user is touching a point (any point will do) in the lower-right sub-area of the touch area, so, for the image currently captured in the viewing area, the wearable smart device broadcasts the visual analysis information corresponding to the lower-right sub-area of the image. The user thus conveniently hears the desired information and knows intuitively that the broadcast concerns the lower right of the current forward view. If the user then learns from the information that an obstacle lies ahead, the user also plainly knows it is to the lower right, giving a good sense of direction; similarly, on entering a room, the user can efficiently learn the situation in each direction this way.
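For the nine-grid case, the touch-selected sub-area can be found with simple integer arithmetic. The sketch below assumes a 3-by-3 layout; all names are illustrative, not taken from the patent:

```python
DIRECTIONS = [
    "upper left", "upper middle", "upper right",
    "middle left", "centre", "middle right",
    "lower left", "lower middle", "lower right",
]

def grid_cell(tx: float, ty: float, tw: float, th: float,
              rows: int = 3, cols: int = 3) -> int:
    """Index of the second sub-area containing the touched point (tx, ty)."""
    col = min(int(tx / tw * cols), cols - 1)   # clamp so the far edge stays in range
    row = min(int(ty / th * rows), rows - 1)
    return row * cols + col

# Touching near the lower-right corner of a 1000 x 1000 pad:
# grid_cell(990, 990, 1000, 1000) -> 8; DIRECTIONS[8] == "lower right"
```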
In some embodiments of the present application, the wearable smart device interacts with the touch control device over the communication connection to determine the position specified in the image by the touch instruction. The content of the interaction may depend on the division of work between the two devices.
For example, suppose the touch control device can take on as much work as possible. When the user performs a touch operation, the device may not only determine touch position information according to the touch instruction (in the form of coordinates or a position number, reflecting where the user touched), but also further determine, from that information and the mapping relation, the mapped point or area in the image (that is, the position specified by the touch instruction) and notify the wearable smart device, which can then adopt the specified position directly, reducing its burden.
As another example, if the touch control device is less capable, it may only determine the touch position information from the touch instruction and send it to the wearable smart device, which then determines the position specified in the image according to the mapping relation and the received touch position information.
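The second division of work could be sketched as follows, reusing map_touch_to_view from the earlier sketch. The message format is an assumption, since the patent defines none:

```python
import json

def touchpad_send(tx: float, ty: float, send) -> None:
    """'Weak' touch pad: determine only the touch position info and forward it."""
    send(json.dumps({"type": "touch", "x": tx, "y": ty}))

def glasses_on_message(raw: str, touch_size, view_size):
    """Glasses side: apply the mapping relation to find the specified position."""
    msg = json.loads(raw)
    if msg.get("type") == "touch":
        return map_touch_to_view((msg["x"], msg["y"]), touch_size, view_size)
    return None  # other interaction purposes (pause, volume, ...) would be handled here
```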
Note that the two examples above treat determining the position as the main purpose of the interaction and describe the interaction content only briefly. In practice, the interaction may carry more content for other purposes, such as maintaining the communication connection, controlling broadcast actions (replay, pause, volume adjustment, and the like), or adjusting the viewfinding of the wearable smart device.
In some embodiments of the present application, if the wearable smart device itself has a visual analysis capability, it may analyze the image locally; alternatively, the image may be handed to other equipment, such as a remote server, for visual analysis, with the wearable smart device obtaining the visual analysis information returned.
Further, supposing the wearable smart device analyzes locally, its obtaining visual analysis information for the specified position may specifically include: performing visual analysis on the specified position with an image processing algorithm to obtain visual analysis information for that position; or performing visual analysis on all areas of the image to obtain visual analysis information for all areas and then screening out the information for the specified position. The former scheme helps reduce the wearable smart device's workload; the latter helps respond efficiently to successive requests for visual analysis information about several different positions in the same image.
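The trade-off between the two schemes can be made concrete with a sketch. Here analyzer stands in for whatever image processing algorithm is used, and images are assumed to be NumPy-style arrays indexed [y, x]; neither assumption comes from the patent:

```python
def analyze_specified_only(image, region, analyzer):
    """Scheme 1: analyze just the specified region -- less work per request."""
    x0, y0, x1, y1 = region
    return analyzer(image[y0:y1, x0:x1])

def analyze_all_then_filter(image, regions, analyzer):
    """Scheme 2: analyze every region up front, so successive touches on the
    same image are answered from the cache without re-analysis."""
    cache = {r: analyzer(image[r[1]:r[3], r[0]:r[2]]) for r in regions}
    return cache.get   # call the returned function with a region to look it up
```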
In some embodiments of the present application, provided the touch control device supports it, broadcasting of visual analysis information can be differentiated not only by touch position but also by other factors such as touch duration and touch strength. Even for the same position in the same image, different touch durations and/or strengths may lead to differentiated broadcasts; the differentiation may lie in the broadcast content, or in the broadcast manner (for example the number of repetitions, the volume, or the presence of an alarm sound) while the content stays the same.
The matching relation between these factors and the broadcast content or manner can be preset, and broadcasting then follows it. On this basis, the wearable smart device broadcasting visual analysis information for the specified position may specifically include: broadcasting, according to the touch duration and/or strength corresponding to the touch instruction, the visual analysis information for the specified position that matches that duration and/or strength. For example, when the touch duration and/or strength exceeds a certain threshold, the device broadcasts relatively detailed information, and otherwise relatively brief information (say, a 2-second touch triggers a report of whether an obstacle exists at the position, while a 4-second touch triggers a report of what the obstacle is). As another example, when the duration and/or strength exceeds a threshold and an obstacle is determined to be ahead, the device may broadcast the obstacle information together with an alarm sound, and otherwise without one; and so on.
In addition, a minimum threshold for touch duration and/or strength can be set; if it is not reached, nothing is broadcast, which guards against accidental touches.
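A sketch of such threshold-based broadcast selection, reusing the VisualAnalysisInfo structure from the earlier sketch; the concrete threshold values are illustrative assumptions only:

```python
MIN_TOUCH_S = 0.15        # below this, treat the touch as accidental: no broadcast
DETAIL_THRESHOLD_S = 3.0  # illustrative boundary between brief and detailed reports

def choose_broadcast(info, touch_seconds: float, touch_force: float = 0.0):
    """Pick broadcast content matched to touch duration and/or strength."""
    if touch_seconds < MIN_TOUCH_S:
        return None                          # guard against accidental touches
    if touch_seconds >= DETAIL_THRESHOLD_S or touch_force > 0.8:
        # relatively detailed: say what is there, not just whether something is
        if info.obstacle and info.recognized and info.recognized.object_type:
            return f"Obstacle ahead: {info.recognized.object_type}"
        return "Obstacle ahead" if info.obstacle else "Nothing detected here"
    # relatively brief: just whether an object exists at the position
    return "Obstacle ahead" if info.obstacle else "Path clear"
```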
In some embodiments of the present application, the user may listen to the wearable smart device's broadcasts through an external speaker or headphones. If headphones are used, a bone conduction headset is especially suitable for the visually impaired: their auditory channel is all the more important for perceiving the world, and bone conduction delivers the additional sound information without blocking that channel, so practicality is better.
In some embodiments of the present application, as mentioned above, the communication connection may be wired. If the wired connection can supply power, as a USB connection can, the touch control device may further include a battery that powers the wearable smart device through the connection; the wearable smart device can then carry no battery or only a small one, which helps reduce its weight.
Based on the above description, some embodiments of the present application further provide an exemplary detailed structural schematic diagram of the intelligent system of fig. 1, as shown in fig. 3.
In fig. 3, the wearable smart device 12 is assumed to be smart glasses, and the touch control device 11 a touch pad whose touch area corresponds to the viewing area of the glasses. The wearable smart device 12 includes at least one camera 121, at least one processor 122, a Bluetooth module 123, a USB interface 124, at least one battery 125, and so on. The wearable smart device 12 and the touch control device 11 are connected by USB and/or Bluetooth.
Further, some embodiments of the present application also provide an exemplary workflow diagram of the intelligent system of fig. 3, which is described briefly, as shown in fig. 4.
The flow in fig. 4 may include the following steps:
The intelligent system starts. The smart glasses capture images of the viewing area. The touch pad receives input (a touch instruction issued by the user through a touch operation), determines it, and sends it to the glasses. On receiving the input, the glasses determine the specified position on the captured image, perform visual analysis on that position, obtain its visual analysis information, and broadcast it to the user, at least telling the user whether an object exists at the position and, if so, the related information of the object. After broadcasting, the glasses can continue capturing images of the viewing area to keep assisting the user.
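Assembled from the sketches above, this on-demand variant reduces to a simple loop. The camera, touchpad, and speaker interfaces are assumptions, not specified by the patent, and the analyzer is assumed to return a VisualAnalysisInfo:

```python
def region_for(idx: int, w: int, h: int, rows: int = 3, cols: int = 3):
    """Pixel bounds (x0, y0, x1, y1) of first sub-area idx in a w x h image."""
    r, c = divmod(idx, cols)
    return (c * w // cols, r * h // rows, (c + 1) * w // cols, (r + 1) * h // rows)

def run_assist_loop(camera, touchpad, analyzer, speaker, touch_size):
    """On-demand variant of fig. 4: analyze only where and when the user touches."""
    while True:
        image = camera.capture()              # collect an image of the viewing area
        event = touchpad.poll()               # assumed to yield (x, y, seconds) or None
        if event is None:
            continue                          # no input: keep collecting images
        tx, ty, seconds = event
        h, w = image.shape[:2]
        idx = grid_cell(tx, ty, *touch_size)  # the position specified on the image
        info = analyze_specified_only(image, region_for(idx, w, h), analyzer)
        text = choose_broadcast(info, seconds)
        if text:                              # tell the user whether an object exists
            speaker.say(f"{DIRECTIONS[idx]}: {text}")
```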
Some embodiments of the present application also provide another exemplary workflow diagram of the intelligent system of fig. 3, which is described briefly, as shown in fig. 5.
The flow in fig. 5 may include the following steps:
The intelligent system starts. The smart glasses capture images of the viewing area and perform visual analysis on all areas of each captured image. The touch pad receives input, determines it, and sends it to the glasses. On receiving the input, the glasses determine the specified position on the captured image, screen out the visual analysis information for that position from the results already obtained, and broadcast it to the user, at least telling the user whether an object exists at the position and, if so, the related information of the object. After broadcasting, the glasses can continue capturing images of the viewing area to keep assisting the user.
Based on the same idea, some embodiments of the present application further provide a nonvolatile computer storage medium corresponding to the intelligent system.
Some embodiments of the present application provide a non-volatile computer storage medium for use in the intelligent system of fig. 1; the medium is located partly on the touch control device and partly on the wearable smart device and stores computer-executable instructions configured to:
cause the touch control device to receive a touch instruction of a user and interact with the wearable smart device according to the touch instruction;
and cause the wearable smart device to capture images in a viewing area, determine, through the interaction, the position specified in the images by the touch instruction, and obtain and broadcast visual analysis information for the specified position.
The embodiments in the present application are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the media embodiment, since it is substantially similar to the system embodiment, the description is simple, and reference may be made to part of the description of the system embodiment for relevant points.
The medium provided by the embodiment of the present application corresponds to the system, and therefore, the medium also has beneficial technical effects similar to those of the corresponding system, and since the beneficial technical effects of the system have been described in detail above, the beneficial technical effects of the medium are not described herein again.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in the form of computer-readable media, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can store information by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises it.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (13)

1. An intelligent system for assisting a user, characterized by comprising a touch control device and a wearable smart device communicatively connected to the touch control device;
the touch control device is used for receiving a touch instruction of a user and interacting with the wearable smart device according to the touch instruction;
the wearable smart device is used for capturing images in a viewing area, determining, through the interaction, the position specified in the images by the touch instruction, and obtaining and broadcasting visual analysis information for the specified position.
2. The smart system of claim 1, wherein the wearable smart device comprises smart glasses, the viewing area comprising at least a partial field of view of the smart glasses.
3. The intelligent system according to claim 1, wherein the touch control device comprises a touch pad, and the touch instruction is issued by the user touching a touch area of the touch pad;
the viewing area and the touch area have a mapping relation, and the specified position comprises: the portion of the viewing area mapped from the portion of the touch area that the user touch-selects when issuing the touch instruction.
4. The intelligent system according to claim 3, wherein the partial touch area and the partial viewing area are each a single point or several discrete points.
5. The intelligent system according to claim 3, wherein the viewing area is divided into a certain number of first sub-areas, the touch area is correspondingly divided into the same number of second sub-areas, and the first sub-areas and the second sub-areas are in one-to-one mapping;
the touch-selected partial touch area comprises: the second sub-area in which a touched point on the touch pad lies, or a second sub-area delineated by the touch on the touch pad.
6. The intelligent system according to claim 3, wherein the touch control device interacting with the wearable smart device according to the touch instruction specifically comprises:
the touch control device determines touch position information according to the touch instruction and sends the touch position information to the wearable smart device;
and the wearable smart device determines, according to the mapping relation and the received touch position information, the position specified in the image by the touch instruction.
7. The intelligent system according to claim 3, wherein the wearable smart device obtaining visual analysis information for the specified position specifically comprises:
the wearable smart device performs visual analysis on the specified position using an image processing algorithm to obtain visual analysis information for the specified position; or,
the wearable smart device performs visual analysis on all areas in the image using an image processing algorithm to obtain visual analysis information for all areas, and screens out the visual analysis information for the specified position.
8. The intelligent system of claim 1, wherein the visual analysis information for the specified position comprises: related information, obtained through visual analysis, of an object present at the specified position, wherein the related information comprises obstacle information and/or recognized-object information, the obstacle information comprises distance and/or direction, and the recognized-object information comprises at least one of text, object type, and attributes.
9. The intelligent system according to claim 1, wherein the wearable smart device broadcasting visual analysis information for the specified position specifically comprises:
the wearable smart device broadcasts, according to the touch duration and/or touch strength corresponding to the touch instruction, visual analysis information for the specified position that matches that touch duration and/or touch strength.
10. The intelligent system according to claim 9, wherein the wearable smart device broadcasts relatively detailed visual analysis information when the touch duration and/or strength exceeds a threshold, and relatively brief visual analysis information otherwise.
11. The intelligent system of claim 1, wherein the wearable smart device comprises a bone conduction headset to broadcast visual analytics information for the designated location to a user.
12. The intelligent system according to claim 1, wherein the communication connection is a wired connection capable of supplying power or a short-range wireless communication connection;
if the communication connection is a wired connection capable of supplying power, the touch control device comprises a battery for supplying power to the wearable smart device through that connection.
13. A non-volatile computer storage medium for use in an intelligent system for assisting a user, the non-volatile computer storage medium storing computer-executable instructions, the intelligent system comprising a touch control device and a wearable smart device communicatively connected to the touch control device, the computer-executable instructions configured to:
cause the touch control device to receive a touch instruction of a user and interact with the wearable smart device according to the touch instruction;
and cause the wearable smart device to capture images in a viewing area, determine, through the interaction, the position specified in the images by the touch instruction, and obtain and broadcast visual analysis information for the specified position.
CN201811124734.5A 2018-09-26 2018-09-26 Intelligent system for assisting user and medium applied to intelligent system Pending CN110955348A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811124734.5A CN110955348A (en) 2018-09-26 2018-09-26 Intelligent system for assisting user and medium applied to intelligent system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811124734.5A CN110955348A (en) 2018-09-26 2018-09-26 Intelligent system for assisting user and medium applied to intelligent system

Publications (1)

Publication Number Publication Date
CN110955348A (en) 2020-04-03

Family

ID=69964614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811124734.5A Pending CN110955348A (en) 2018-09-26 2018-09-26 Intelligent system for assisting user and medium applied to intelligent system

Country Status (1)

Country Link
CN (1) CN110955348A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113157139A (en) * 2021-05-20 2021-07-23 深圳市亿家网络有限公司 Auxiliary operation method and operation device for touch screen
CN113157139B (en) * 2021-05-20 2024-02-27 深圳市亿家网络有限公司 Touch screen auxiliary operation method and operation device
CN114549974A (en) * 2022-01-26 2022-05-27 西宁城市职业技术学院 Interaction method of multiple intelligent devices based on user

Similar Documents

Publication Publication Date Title
US11024083B2 (en) Server, user terminal device, and control method therefor
US9563272B2 (en) Gaze assisted object recognition
US11164546B2 (en) HMD device and method for controlling same
CN107390863B (en) Device control method and device, electronic device and storage medium
KR20150028084A (en) Display device and operation method thereof
JP2018530016A (en) VR control method, apparatus, electronic device, program, and recording medium
US10630887B2 (en) Wearable device for changing focal point of camera and method thereof
KR20170051013A (en) Tethering type head mounted display and method for controlling the same
KR102091604B1 (en) Mobile terminal and method for controlling the same
CN111970456B (en) Shooting control method, device, equipment and storage medium
KR20140033896A (en) Mobile terminal and method for controlling of the same
KR20150041453A (en) Wearable glass-type image display device and control method thereof
JP6423129B1 (en) Smart device control method and apparatus
KR20170089662A (en) Wearable device for providing augmented reality
CN104301661A (en) Intelligent household monitoring method and client and related devices
CN110782532B (en) Image generation method, image generation device, electronic device, and storage medium
CN107092359A (en) Virtual reality visual angle method for relocating, device and terminal
CN208689558U (en) A kind of intelligence system assisting user
CN110955348A (en) Intelligent system for assisting user and medium applied to intelligent system
WO2019119290A1 (en) Method and apparatus for determining prompt information, and electronic device and computer program product
KR20170055296A (en) Tethering type head mounted display and method for controlling the same
US11756302B1 (en) Managing presentation of subject-based segmented video feed on a receiving device
CN111782053B (en) Model editing method, device, equipment and storage medium
KR20130065074A (en) Electronic device and controlling method for electronic device
KR20140074498A (en) Mobile terminal and method for controlling of the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination