CN111182202A - Content identification method based on wearable device and wearable device - Google Patents


Info

Publication number
CN111182202A
CN111182202A (application CN201911088886.9A; granted as CN111182202B)
Authority
CN
China
Prior art keywords
content
host
wearable device
identification
shooting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911088886.9A
Other languages
Chinese (zh)
Other versions
CN111182202B (en)
Inventor
施锐彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TCL China Star Optoelectronics Technology Co Ltd
Original Assignee
Shenzhen China Star Optoelectronics Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen China Star Optoelectronics Technology Co Ltd filed Critical Shenzhen China Star Optoelectronics Technology Co Ltd
Priority to CN201911088886.9A priority Critical patent/CN111182202B/en
Publication of CN111182202A publication Critical patent/CN111182202A/en
Application granted granted Critical
Publication of CN111182202B publication Critical patent/CN111182202B/en
Legal status: Active (granted)

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/163Wearable computers, e.g. on a belt
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/147Details of sensors, e.g. sensor lenses
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/148Segmentation of character regions
    • G06V30/153Segmentation of character regions using recognition of characters or words

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Vascular Medicine (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A content identification method based on a wearable device, and the wearable device, are provided. The wearable device comprises a smart host and a host support; when stood up perpendicular to the host support, the smart host can rotate to any angle within 360°. The method comprises the following steps: when the smart host is stood up perpendicular to the host support, detecting whether a user's touch region is located at the edge of the shooting angle of view of any shooting module of the smart host; if so, controlling the smart host to rotate, while it remains stood up perpendicular to the host support, until the shooting center point of one of its shooting modules is closest to the touch region; controlling the shooting module whose shooting center point is closest to the touch region to photograph the touch region, obtaining a captured image; and performing content identification on the captured image. The embodiments of the present application can improve the accuracy of content identification performed with a wearable device.

Description

Content identification method based on wearable device and wearable device
Technical Field
The present application relates to the technical field of wearable devices, and in particular to a wearable-device-based content identification method and a wearable device.
Background
Currently, more and more wearable devices (such as phone watches) are equipped with cameras and can provide rich functionality. In practice, it has been found that when the camera of a wearable device is used for content recognition, if the region the user touches lies at the edge of the camera's shooting angle of view, the captured image is prone to distortion there. The wearable device then has difficulty accurately recognizing the content in the captured image, which reduces recognition accuracy.
Disclosure of Invention
The embodiments of the present application disclose a wearable-device-based content identification method and a wearable device, which can improve the accuracy of content identification performed with a wearable device.
A first aspect of the embodiments of the present application discloses a content identification method based on a wearable device, wherein the wearable device comprises a smart host and a host support, and the smart host can rotate to any angle within 360° when stood up perpendicular to the host support. The method comprises the following steps:
when the smart host is stood up perpendicular to the host support, detecting whether a user's touch region is located at the edge of the shooting angle of view of any shooting module of the smart host;
if so, controlling the smart host to rotate, while it remains stood up perpendicular to the host support, until the shooting center point of one of its shooting modules is closest to the touch region;
controlling the shooting module whose shooting center point is closest to the touch region to photograph the touch region, obtaining a captured image; and
performing content identification on the captured image.
As an optional implementation, in the first aspect of the embodiments of the present application, after performing content identification on the captured image, the method further comprises:
outputting the recognized content of the captured image to the screen of the smart host for display; and
controlling the smart host to rotate until the screen faces the face of the wearer of the wearable device, so that the wearer can conveniently view the recognized content of the captured image.
As another optional implementation, in the first aspect of the embodiments of the present application, after controlling the smart host to rotate until the screen faces the wearer's face, the method further comprises:
acquiring an operation instruction issued by the wearer;
if the operation instruction indicates click-to-read of the recognized content, extracting the text content from the recognized content;
acquiring the click-to-read content selected by the wearer from the text content; and
broadcasting the click-to-read content.
As another optional implementation, in the first aspect of the embodiments of the present application, the method further comprises:
if the operation instruction indicates a question search on the recognized content, extracting the image-text content from the recognized content;
searching a networked question bank for question-and-answer information matching the image-text content; and
outputting the question-and-answer information to a learning device connected to the wearable device for display.
As another optional implementation, in the first aspect of the embodiments of the present application, after extracting the image-text content from the recognized content and before searching the networked question bank for matching question-and-answer information, the method further comprises:
recognizing the semantics of the image-text content, and judging from the semantics whether the image-text content contains complete question information; if not, adjusting the shooting angle of view of the shooting module until it contains the complete question information, and then executing the step of searching the networked question bank for question-and-answer information matching the image-text content.
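The check-then-widen loop above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the callables `is_complete` (semantic completeness judgment) and `widen_view` (viewing-angle adjustment plus re-extraction) and the retry cap are all illustrative assumptions.

```python
def ensure_complete_question(teletext, is_complete, widen_view, max_tries=3):
    """Before querying the networked question bank, make sure the extracted
    image-text content contains a complete question; if not, widen the
    shooting angle of view and re-extract, up to max_tries attempts."""
    for _ in range(max_tries):
        if is_complete(teletext):
            return teletext
        # Adjust the shooting angle of view and re-extract the image-text
        teletext = widen_view(teletext)
    return teletext
```

Only once this returns content judged complete would the question-and-answer search proceed.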
A second aspect of the embodiments of the present application discloses a wearable device comprising a smart host and a host support, wherein the smart host can rotate to any angle within 360° when stood up perpendicular to the host support. The smart host comprises:
a touch detection unit, configured to detect, when the smart host is stood up perpendicular to the host support, whether a user's touch region is located at the edge of the shooting angle of view of any shooting module of the smart host;
a first control unit, configured to, when the touch detection unit detects that the touch region is located at the edge of the shooting angle of view, control the smart host to rotate, while it remains stood up perpendicular to the host support, until the shooting center point of one of its shooting modules is closest to the touch region;
a second control unit, configured to control the shooting module whose shooting center point is closest to the touch region to photograph the touch region, obtaining a captured image; and
a content identification unit, configured to perform content identification on the captured image.
As an optional implementation, in the second aspect of the embodiments of the present application, the smart host further comprises:
a first output unit, configured to output the recognized content of the captured image to the screen of the smart host for display after the content identification unit performs content identification on the captured image; and
a third control unit, configured to control the smart host to rotate until the screen faces the face of the wearer of the wearable device, so that the wearer can conveniently view the recognized content of the captured image.
As another optional implementation, in the second aspect of the embodiments of the present application, the smart host further comprises:
a first acquisition unit, configured to acquire an operation instruction issued by the wearer after the third control unit rotates the smart host so that the screen faces the wearer's face;
a first extraction unit, configured to extract the text content from the recognized content when the operation instruction acquired by the first acquisition unit indicates click-to-read of the recognized content;
a second acquisition unit, configured to acquire the click-to-read content selected by the wearer from the text content; and
a broadcasting unit, configured to broadcast the click-to-read content.
As another optional implementation, in the second aspect of the embodiments of the present application, the smart host further comprises:
a second extraction unit, configured to extract the image-text content from the recognized content when the operation instruction acquired by the first acquisition unit indicates a question search on the recognized content;
a networked search unit, configured to search a networked question bank for question-and-answer information matching the image-text content; and
a second output unit, configured to output the question-and-answer information to a learning device connected to the wearable device for display.
As another optional implementation, in the second aspect of the embodiments of the present application, the smart host further comprises:
a semantic recognition unit, configured to, after the second extraction unit extracts the image-text content and before the networked search unit queries the networked question bank, recognize the semantics of the image-text content and judge from the semantics whether it contains complete question information; and
an adjusting unit, configured to, when the semantic recognition unit judges that the image-text content does not contain complete question information, adjust the shooting angle of view of the shooting module until it contains the complete question information, and then trigger the networked search unit to search the networked question bank for question-and-answer information matching the image-text content containing the complete question information.
A third aspect of the embodiments of the present application discloses another wearable device comprising a smart host and a host support, wherein the smart host can rotate to any angle within 360° when stood up perpendicular to the host support, and the smart host comprises:
a memory storing executable program code; and
a processor coupled with the memory;
wherein the processor calls the executable program code stored in the memory to execute all or part of the steps of any of the wearable-device-based content identification methods disclosed in the first aspect of the embodiments of the present application.
A fourth aspect of the embodiments of the present application discloses a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute all or part of the steps of any of the wearable-device-based content identification methods disclosed in the first aspect of the embodiments of the present application.
A fifth aspect of the embodiments of the present application discloses a computer program product which, when run on a computer, causes the computer to execute all or part of the steps of any of the wearable-device-based content identification methods in the first aspect of the embodiments of the present application.
Compared with the prior art, the embodiments of the present application have the following beneficial effects. In the embodiments, the wearable device may comprise a smart host and a host support, the smart host being able to rotate to any angle within 360° when stood up perpendicular to the host support. When the smart host is stood up perpendicular to the host support, the device can detect whether a user's touch region is located at the edge of the shooting angle of view of any shooting module of the smart host; if so, it can control the smart host to rotate, while it remains perpendicular to the host support, until the shooting center point of one of its shooting modules is closest to the touch region; that shooting module can then be controlled to photograph the touch region, obtaining a captured image on which content identification is performed. Thus, by rotating the smart host of the wearable device, the shooting angle of view of its shooting module can be changed so that a user touch region originally at the edge of the view is brought as close as possible to the shooting center point. This reduces distortion in the captured image and the adverse effect that distortion has on content identification, and therefore improves the accuracy of content identification performed with the wearable device.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required for describing the embodiments are briefly introduced below. Those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a content identification method based on a wearable device disclosed in an embodiment of the present application;
fig. 2 is a schematic flowchart of another content identification method based on a wearable device disclosed in an embodiment of the present application;
fig. 3 is a schematic flowchart of another content identification method based on a wearable device disclosed in an embodiment of the present application;
fig. 4 is a schematic structural diagram of a wearable device disclosed in an embodiment of the present application;
fig. 5 is a schematic structural diagram of another wearable device disclosed in the embodiments of the present application;
fig. 6 is a schematic structural diagram of another wearable device disclosed in the embodiments of the present application;
fig. 7 is a schematic structural diagram of another wearable device disclosed in the embodiments of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "comprises" and "comprising," and any variations thereof, in the embodiments of the present application, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Furthermore, the terms "mounted," "disposed," "provided," and "connected" are to be construed broadly. For example, a connection may be a fixed connection, a removable connection, or a unitary construction; it may be a mechanical connection or an electrical connection; and it may be direct, indirect through intervening media, or internal communication between two devices, elements, or components. The specific meanings of these terms in the present application can be understood by those of ordinary skill in the art according to context.
The embodiment of the application discloses a content identification method based on wearable equipment and the wearable equipment, and the accuracy of content identification by utilizing the wearable equipment can be improved. The following detailed description is made with reference to the accompanying drawings.
In order to better understand the wearable-device-based content identification method disclosed in the embodiments of the present application, the wearable device to which the method applies is first described. The wearable device may comprise a smart host, a host support, and side straps; the first end of the host support is detachably (plug-in) connected to the first end of a side strap, and the second end of the host support is likewise detachably connected to the second end of a side strap. The first end (rotating end) of the smart host is movably connected to the first end of the host support through a rotating ball, while the second end (free end) of the smart host is unconnected. Normally, the smart host lies stacked on the host support, i.e., the bottom side of the smart host rests against the upper surface of the host support; when the smart host is flipped to different angles relative to the host support through the rotating ball, its bottom side forms an angle (adjustable between 0° and 180°) with the upper surface of the host support.
When the smart host is stood up perpendicular to the host support (i.e., its bottom side forms 90° with the upper surface of the host support), the smart host can also rotate to any angle within 360° through the rotating ball.
It should be noted that the wearable device described above is only one implementation of the wearable device to which the content identification method based on the wearable device disclosed in the embodiment of the present application is applied, and should not be construed as a limitation to the present application.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating a content identification method based on a wearable device according to an embodiment of the present disclosure. The wearable device comprises an intelligent host and a host support, wherein the intelligent host can rotate at any angle within a range of 360 degrees when standing up to be perpendicular to the host support. As shown in fig. 1, the content recognition method may include the steps of:
101. When the smart host of the wearable device is stood up perpendicular to its host support, the wearable device detects whether the user's touch region is located at the edge of the shooting angle of view of any shooting module of the smart host; if so, steps 102 to 104 are executed; otherwise, the flow ends.
In the embodiments of the present application, when the smart host of the wearable device is stood up perpendicular to its host support, any shooting module of the smart host may enter a pre-shooting state and output its shooting angle of view to the screen of the smart host for display. For example, the rear shooting module of the smart host may be in the pre-shooting state and output its view to the screen; alternatively, the front and rear shooting modules may both be in the pre-shooting state at the same time, and if either module detects the face of the wearer of the wearable device, the view of the other module is output to the screen of the smart host for display.
Further, detecting whether the user's touch region is located at the edge of the shooting angle of view of any shooting module of the smart host may specifically comprise: detecting the user's touch operation on the screen of the smart host; when a touch operation is detected, acquiring the touch region of that operation; and judging whether the distance between the touch region and the shooting center point of the shooting module exceeds a preset threshold. If it does, the touch region is judged to be at the edge of the shooting angle of view, and steps 102 to 104 are executed.
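The edge test just described reduces to a distance comparison. A minimal sketch follows; the 2-D coordinate convention, the function name, and the threshold value are illustrative assumptions, not taken from the patent:

```python
import math

def is_touch_at_edge(touch_center, shooting_center, threshold):
    """Return True when the center of the touch region lies farther from
    the shooting module's shooting center point than the preset threshold,
    i.e. the touch region sits at the edge of the shooting angle of view."""
    dx = touch_center[0] - shooting_center[0]
    dy = touch_center[1] - shooting_center[1]
    return math.hypot(dx, dy) > threshold
```

In practice the threshold would be chosen relative to the module's field of view, since distortion grows toward the image periphery.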
102. The wearable device controls the smart host to rotate, while it remains stood up perpendicular to the host support, until the shooting center point of one of its shooting modules is closest to the touch region.
In the embodiments of the present application, the wearable device may calculate a target rotation direction and a target rotation angle for the smart host from the coordinate information of the touch region and the smart host's current rotation angle (within the 360° range), and then control the smart host to rotate accordingly so that the shooting center point of the shooting module is closest to the touch region.
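One way to derive the target rotation direction and angle from the touch-region coordinates is sketched below. The angle convention (counterclockwise from the positive x-axis) and the shortest-arc policy are illustrative assumptions; the patent does not fix them.

```python
import math

def plan_rotation(touch_xy, center_xy, current_angle_deg):
    """Compute a rotation that brings the shooting center point toward the
    touch region: the bearing from the shooting center to the touch region
    becomes the target angle, and the host turns through the smaller arc."""
    target = math.degrees(math.atan2(touch_xy[1] - center_xy[1],
                                     touch_xy[0] - center_xy[0])) % 360
    delta = (target - current_angle_deg) % 360
    if delta <= 180:
        return "counterclockwise", delta
    return "clockwise", 360 - delta
```

The returned direction and angle would then drive the rotating ball between the smart host and the host support.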
By implementing this method, a user touch region originally at the edge of the shooting angle of view is brought as close as possible to the shooting center point, reducing distortion in the captured image. When steps 103 and 104 are subsequently executed, the adverse effect of that distortion on content identification is reduced, so the accuracy of content identification performed with the wearable device can be improved.
103. The wearable device controls the shooting module of the smart host whose shooting center point is closest to the touch region to photograph the touch region, obtaining a captured image.
As an optional implementation, the wearable device may detect a voice keyword uttered by the user and perform the corresponding shooting operation. For example, when the keyword "take a picture" is detected, the shooting module whose shooting center point is closest to the touch region may be controlled to photograph the touch region, obtaining a captured image; when the keyword "continuous shooting" is detected, the module may be controlled to shoot continuously with a preset number of shots and shooting interval; and when the keywords "2 seconds later" and "take a picture" are detected, the module may be controlled to shoot after a 2-second delay.
As another optional implementation, the wearable device may perform the corresponding shooting operation according to the user's touch operation on the screen of the smart host. For example, on a single touch, the shooting module whose shooting center point is closest to the touch region may photograph the touch region, obtaining a captured image; on a double click, the module may shoot after a preset delay; and on a long press, the module may shoot after autofocusing.
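The two optional implementations above amount to a trigger-to-action table. The sketch below illustrates the dispatch; the trigger strings and action names are assumptions chosen to mirror the examples in the description, not identifiers from the patent.

```python
def shooting_action(trigger):
    """Map a detected trigger (voice keyword or touch gesture) to a
    shooting operation, following the examples in steps 103's two
    optional implementations."""
    actions = {
        "take a picture": "shoot_now",         # single shot of the touch region
        "continuous shooting": "burst_shoot",  # preset shot count and interval
        "single touch": "shoot_now",
        "double click": "delayed_shoot",       # shoot after a preset delay
        "long press": "focus_then_shoot",      # autofocus, then shoot
    }
    return actions.get(trigger, "ignore")
```

An unrecognized trigger is simply ignored rather than causing a shot.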
104. The wearable device performs content identification on the captured image.
In the embodiments of the present application, performing content identification on the captured image may specifically comprise passing the captured image through a preprocessing model and then a content identification model in sequence to obtain the recognized content. The preprocessing model may perform steps such as image-text segmentation, graying, binarization, noise reduction, deskewing, character segmentation, and text merging; the embodiments of the present application do not specifically limit these steps.
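Two of the listed preprocessing steps, graying and binarization, can be sketched without any imaging library. This is a minimal dependency-free illustration over nested lists of pixels; a real pipeline would operate on image arrays and typically use an adaptive threshold such as Otsu's method rather than the fixed cutoff assumed here.

```python
def to_gray(rgb):
    """Graying step: combine R, G, B with ITU-R BT.601 luma weights."""
    return [[int(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in row]
            for row in rgb]

def binarize(gray, threshold=128):
    """Binarization step: map grayscale values (0-255) to 0/1 so that
    later character segmentation sees clean foreground/background."""
    return [[1 if px >= threshold else 0 for px in row] for row in gray]
```

The binarized image would then feed the noise reduction, deskewing, and character segmentation stages before the content identification model runs.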
Therefore, the content identification method described in fig. 1 can improve the accuracy of content identification by using a wearable device.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating another content identification method based on a wearable device according to an embodiment of the present application. The wearable device comprises an intelligent host and a host support, wherein the intelligent host can rotate at any angle within a range of 360 degrees when standing up to be perpendicular to the host support. As shown in fig. 2, the content recognition method may include the steps of:
201. When the smart host of the wearable device is stood up perpendicular to its host support, the wearable device detects whether the user's touch region is located at the edge of the shooting angle of view of any shooting module of the smart host; if so, steps 202 to 210 are executed; otherwise, the flow ends.
202. The wearable device controls the smart host to rotate, while it remains stood up perpendicular to the host support, until the shooting center point of one of its shooting modules is closest to the touch region.
203. The wearable device controls the shooting module whose shooting center point is closest to the touch region to photograph the touch region, obtaining a captured image.
204. The wearable device performs content identification on the captured image.
205. The wearable device outputs the recognized content of the captured image to the screen of its smart host for display.
As an optional implementation, the wearable device may also output the recognized content of the captured image to a learning device connected to the wearable device for display. The learning device may be any device or system with a display screen (e.g., a learning tablet or a smart speaker with a screen); the embodiments of the present application are not limited in this respect.
206. The wearable device controls the smart host to rotate until the screen faces the face of the wearer of the wearable device, so that the wearer can conveniently view the recognized content of the captured image.
For example, the wearable device may judge whether any shooting module of the smart host detects the face of the wearer of the wearable device; if so, it acquires the direction of the wearer's face and controls the smart host to rotate toward that direction, so that the screen of the smart host faces the wearer's face and the wearer can conveniently view the recognized content of the captured image.
As another example, the wearable device may detect a voice command issued by the user and judge whether the voiceprint features of the command match those of the wearer of the wearable device; if they match, it acquires the direction of the sound source of the command and controls the smart host to rotate toward that direction.
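The two cues above can be arbitrated as follows. This sketch assumes, as an illustrative policy not stated in the patent, that a detected face takes priority over a voiceprint-matched sound source; the parameter names are also assumptions.

```python
def orient_screen(face_bearing=None, voice_bearing=None,
                  voice_matches_wearer=False):
    """Choose the direction (bearing) to rotate the screen toward the
    wearer: prefer a detected face; otherwise fall back to a sound
    source whose voiceprint matched the wearer. None means no rotation."""
    if face_bearing is not None:
        return face_bearing
    if voice_bearing is not None and voice_matches_wearer:
        return voice_bearing
    return None
```

A non-matching voiceprint deliberately yields no rotation, so the screen is not turned toward a bystander.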
207. The wearable device acquires an operation instruction sent by the wearer.
For example, the wearable device may detect a voice command issued by the wearer and obtain the corresponding operation instruction through keyword detection or natural language processing; it may also detect the wearer's touch operation on the screen of the smart host and obtain the operation instruction corresponding to the touch region of that operation.
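The keyword-detection path for step 207 can be sketched as a simple mapping from utterance to instruction. The keyword lists and instruction names below are illustrative assumptions; the patent leaves the detection technique open (keyword spotting or full NLP).

```python
def parse_instruction(utterance):
    """Map the wearer's voice input to an operation instruction via
    naive keyword detection."""
    text = utterance.lower()
    if any(k in text for k in ("read", "point read", "read aloud")):
        return "click_to_read"     # step 208: extract text for click-to-read
    if any(k in text for k in ("search", "look up")):
        return "question_search"   # question-search branch of the method
    return "unknown"
```

A production system would replace this with a trained keyword spotter or an NLP intent classifier.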
208. If the operation instruction indicates click-to-read on the identification content, the wearable device extracts the text content in the identification content.
209. The wearable device obtains click-to-read content selected by its wearer from the text content.
210. The wearable device broadcasts the click-to-read content.
By implementing this method, the wearable device can provide a click-to-read function on top of recognizing the content of the captured image, which expands the usage scenarios of the wearable device and improves its practicability.
As an optional implementation, after the wearable device performs steps 201 to 210 and broadcasts the click-to-read content, it may further perform the following steps: when the click-to-read content is an English word, search online for information related to the word, which may include its pronunciation, translation, and example sentences; output this information to the screen of the smart host of the wearable device, displaying it near the English word in a floating window; and broadcast the information in sequence. Furthermore, the wearable device may detect the wearer reading the English word aloud, judge whether the follow-read is correct, and output a corresponding score. By implementing this method, the wearable device can help the wearer conveniently look up and translate English learning materials while learning unfamiliar English words, improving the practicability of the wearable device.
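The follow-read judgment mentioned above can be approximated with a simple transcript-similarity score. This is only a sketch: a real device would compare phoneme sequences from the speech recognizer, and the 0–100 scale is an assumption made here for illustration.

```python
from difflib import SequenceMatcher

def follow_read_score(reference_word: str, heard_transcript: str) -> int:
    """Score the wearer's follow-read on a 0-100 scale by string similarity
    between the reference English word and what the recognizer heard."""
    ratio = SequenceMatcher(None, reference_word.lower(),
                            heard_transcript.lower()).ratio()
    return round(ratio * 100)
```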
Therefore, the content identification method described in fig. 2 can improve the accuracy of content identification by using a wearable device.
In addition, by implementing the content identification method described in fig. 2, the point-reading function can be realized by using the wearable device, so that the use scene of the wearable device is expanded, and the practicability of the wearable device is improved.
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating a content identification method based on a wearable device according to an embodiment of the present application. The wearable device comprises a smart host and a host bracket, and the smart host can rotate to any angle within a 360° range when erected perpendicular to the host bracket. As shown in fig. 3, the content recognition method may include the following steps:
301. When the smart host of the wearable device is erected perpendicular to its host bracket, the wearable device detects whether the user's touch area is located at the edge of the shooting angle of view of any shooting module of the smart host; if so, steps 302 to 309 are executed; otherwise, the flow ends.
302. The wearable device controls the smart host, while it is erected perpendicular to the host bracket, to rotate until the shooting center point of one of its shooting modules is closest to the touch area.
303. The wearable device controls the shooting module of the smart host whose shooting center point is closest to the touch area to shoot the touch area, obtaining a captured image.
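Selecting the module whose shooting center point is closest to the touch area (steps 302–303) is what reduces edge distortion in the captured image. A minimal sketch of that selection, assuming touch areas and module centers are expressed as 2-D screen coordinates (a simplification of the real geometry):

```python
import math

def closest_module(touch: tuple[float, float],
                   module_centers: dict[str, tuple[float, float]]) -> str:
    """Pick the shooting module whose shooting center point lies closest
    to the touched area; the host is then rotated toward that module's view."""
    return min(module_centers,
               key=lambda name: math.dist(touch, module_centers[name]))
```

For example, with a front module centered at (0, 0) and a rear module at (10, 0), a touch at (2, 1) selects the front module.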
304. The wearable device performs content recognition on the captured image.
305. The wearable device outputs the identification content of the captured image to the screen of its smart host for display.
306. The wearable device controls the smart host to rotate so that its screen faces the face of the wearer of the wearable device, making it convenient for the wearer to view the identification content of the captured image.
307. The wearable device acquires an operation instruction sent by the wearer.
308. If the operation instruction indicates question searching on the identification content, the wearable device extracts the image-text content in the identification content.
309. The wearable device recognizes the semantics of the image-text content and judges, according to the semantics, whether the image-text content contains complete question information; if it does not, steps 310 to 312 are executed; otherwise, steps 311 to 312 are executed.
For example, when the wearable device detects that the image-text content has incomplete characters or sentences that do not meet a preset grammar standard, it may determine that the image-text content does not contain complete question information.
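That completeness judgment can be approximated with a lightweight heuristic. The checks below (an OCR replacement character and terminal punctuation) stand in for the patent's unspecified "preset grammar standard" and are purely illustrative.

```python
import re

def looks_complete(image_text: str) -> bool:
    """Heuristic completeness check for extracted question text:
    reject text containing garbled/incomplete characters, and require
    that it end with terminal punctuation (ASCII or CJK)."""
    if "\ufffd" in image_text:  # replacement char from failed OCR
        return False
    return bool(re.search(r"[.?!。？！]$", image_text.strip()))
```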
310. The wearable device adjusts the shooting angle of view of a shooting module of its smart host so that it contains the complete question information.
For example, the wearable device may enlarge the shooting angle of view of the shooting module so that it contains the complete question information; it may also rotate the smart host, continuously monitoring the shooting angle of view during the rotation until it contains the complete question information.
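The "enlarge until contained" adjustment can be sketched as a symmetric growth of the view rectangle; the rectangle representation, step size, and iteration cap are assumptions made for illustration, not the patent's optics.

```python
Box = tuple[float, float, float, float]  # (x0, y0, x1, y1)

def widen_until_contained(view: Box, question: Box,
                          step: float = 0.1, max_iters: int = 50) -> Box:
    """Grow the shooting view outward until it contains the question's
    bounding box (or the iteration budget runs out)."""
    x0, y0, x1, y1 = view
    qx0, qy0, qx1, qy1 = question
    for _ in range(max_iters):
        if x0 <= qx0 and y0 <= qy0 and x1 >= qx1 and y1 >= qy1:
            break  # the question is fully inside the view
        x0 -= step; y0 -= step; x1 += step; y1 += step
    return (x0, y0, x1, y1)
```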
311. The wearable device searches a networked question bank for question-and-answer information matching the image-text content.
312. The wearable device outputs the question-and-answer information to the learning device connected to the wearable device for display.
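Steps 311–312 amount to a normalized lookup against a question bank. A minimal in-memory sketch follows; `QUESTION_BANK` and the normalization step are illustrative stand-ins for the networked service the patent assumes.

```python
from typing import Optional

# Illustrative stand-in for the networked question bank.
QUESTION_BANK = {
    "what is the capital of france": "Paris",
}

def normalize(question_text: str) -> str:
    """Lower-case, collapse whitespace, and drop a trailing question mark."""
    return " ".join(question_text.lower().split()).rstrip("?")

def search_bank(image_text: str) -> Optional[str]:
    """Return the matching answer, or None if the bank has no entry."""
    return QUESTION_BANK.get(normalize(image_text))
```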
In this embodiment of the application, by executing steps 308 to 312, the wearable device can realize a photo-based question-search function, so that the wearer of the wearable device can learn autonomously, improving learning enthusiasm.
As an optional implementation, when the wearable device executes step 308, if the acquired operation instruction issued by the wearer of the wearable device indicates searching and correcting the identification content, the following steps may also be executed: distinguish and respectively extract the question information and the answer information in the identification content; search the networked question bank for question-and-answer information matching the question information; correct the answer information according to the question-and-answer information to obtain a correction result; and output the correction result to the learning device connected to the wearable device for display.
Furthermore, if the correction result shows that the answer information contains errors, the erroneous answer information may be highlighted on the learning device for the user to view.
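The correction step can be sketched as a per-question comparison against the reference answers retrieved from the question bank; the dictionary shapes used here are assumptions made for illustration.

```python
def correct_answers(answered: dict[str, str],
                    reference: dict[str, str]) -> dict[str, bool]:
    """Mark each answered question right (True) or wrong (False) against
    the reference answers; wrong entries are the ones a learning device
    would highlight for the wearer."""
    return {
        question: reference.get(question, "").strip().lower()
                  == answer.strip().lower()
        for question, answer in answered.items()
    }
```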
By implementing this method, the wearable device can further help its wearer learn independently and cultivate the wearer's ability and enthusiasm to solve problems independently.
Therefore, the content identification method described in fig. 3 can improve the accuracy of content identification by using a wearable device.
In addition, by implementing the content identification method described in fig. 3, the wearable device can be used to implement a question searching function, thereby helping the wearer of the wearable device to perform autonomous learning and improving learning enthusiasm.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a wearable device disclosed in an embodiment of the present application. The wearable device comprises a smart host and a host bracket, and the smart host can rotate to any angle within a 360° range when erected perpendicular to the host bracket. As shown in fig. 4, the smart host of the wearable device may include:
a touch detection unit 401, configured to detect, when the smart host is erected perpendicular to the host bracket of the wearable device, whether a touch area of the user is located at the edge of the shooting angle of view of any shooting module of the smart host;
a first control unit 402, configured to, when the touch detection unit 401 detects that the user's touch area is located at the edge of the shooting angle of view of any shooting module of the smart host, control the smart host, while it is erected perpendicular to the host bracket, to rotate until the shooting center point of one of its shooting modules is closest to the touch area;
a second control unit 403, configured to control the shooting module of the smart host whose shooting center point is closest to the touch area to shoot the touch area, obtaining a captured image;
a content recognition unit 404 for performing content recognition on the captured image.
It can be seen that, with the wearable device described in fig. 4, a user touch area originally at the edge of the shooting angle of view can be brought as close as possible to the shooting center point, which reduces the distortion in the captured image and, in turn, the adverse effect of that distortion when the content recognition unit 404 recognizes the content of the captured image, thereby improving the accuracy of content identification with the wearable device.
Referring to fig. 5, fig. 5 is a schematic structural diagram of another wearable device disclosed in an embodiment of the present application. The wearable device shown in fig. 5 is an optimization of the wearable device shown in fig. 4. Compared with the wearable device shown in fig. 4, the smart host of the wearable device shown in fig. 5 further includes:
a first output unit 405, configured to output, after the content recognition unit 404 performs content recognition on the captured image, the identification content of the captured image to a screen of the smart host of the wearable device for display;
a third control unit 406, configured to control the smart host to rotate so that the screen faces the face of the wearer of the wearable device, making it convenient for the wearer to view the identification content of the captured image;
a first acquiring unit 407, configured to acquire an operation instruction sent by the wearer after the third control unit 406 controls the smart host to rotate so that the screen faces the wearer's face;
a first extracting unit 408, configured to extract the text content in the identification content when the operation instruction issued by the wearer and acquired by the first acquiring unit 407 indicates click-to-read on the identification content;
a second obtaining unit 409, configured to obtain click-to-read content selected by the wearer from the text content;
and a broadcasting unit 410 for broadcasting the click-to-read content.
As an optional implementation, after the broadcasting unit 410 broadcasts the click-to-read content, if the click-to-read content is an English word, the wearable device may further search online for information related to the word, which may include its pronunciation, translation, and example sentences. On this basis, the first output unit 405 outputs the information to the screen of the smart host of the wearable device, displaying it near the English word in a floating window, and the broadcasting unit 410 broadcasts the information in sequence. Further, the wearable device may detect the wearer reading the English word aloud, judge whether the follow-read is correct, and output a corresponding score through the first output unit 405 or the broadcasting unit 410. In this way, on the basis of the click-to-read function, the wearable device can help its wearer conveniently look up and translate English learning materials while learning unfamiliar English words, improving the practicability of the wearable device.
It can be seen that, by implementing the wearable device described in fig. 5, the accuracy of content identification by the wearable device can be improved.
In addition, implementing the wearable device described in fig. 5 can realize the click-to-read function, expanding the usage scenarios of the wearable device and improving its practicability.
Referring to fig. 6, fig. 6 is a schematic structural diagram of another wearable device disclosed in an embodiment of the present application. The wearable device shown in fig. 6 is an optimization of the wearable device shown in fig. 5. Compared with the wearable device shown in fig. 5, the smart host of the wearable device shown in fig. 6 further includes:
a second extracting unit 411, configured to extract the image-text content in the identification content when the operation instruction issued by the wearer and acquired by the first acquiring unit 407 indicates question searching on the identification content;
a semantic recognition unit 412, configured to recognize the semantics of the image-text content after the second extracting unit 411 extracts it from the identification content, and to judge, according to the semantics, whether the image-text content contains complete question information;
an adjusting unit 413, configured to adjust the shooting angle of view of a shooting module of the smart host of the wearable device so that it contains the complete question information when the semantic recognition unit 412 judges that the image-text content does not contain complete question information;
a networked searching unit 414, configured to search a networked question bank for question-and-answer information matching the image-text content, either when the semantic recognition unit 412 judges that the image-text content contains complete question information, or when the semantic recognition unit 412 judges that it does not and the adjusting unit 413 has adjusted the shooting angle of view of a shooting module of the smart host to contain the complete question information;
and a second output unit 415, configured to output the question and answer information to a learning device connected to the wearable device for display.
In this embodiment of the application, by implementing the above wearable device, a photo-based question-search function can be realized, helping the wearer of the wearable device learn autonomously and improving learning enthusiasm.
As an optional implementation, if the operation instruction issued by the wearer of the wearable device and acquired by the first acquiring unit 407 indicates searching and correcting the identification content, the second extracting unit 411 may further distinguish and respectively extract the question information and the answer information in the identification content; the networked searching unit 414 then searches the networked question bank for question-and-answer information matching the question information; according to the question-and-answer information, the wearable device can correct the answer information to obtain a correction result; on this basis, the second output unit 415 outputs the correction result to the learning device connected to the wearable device for display.
Further, if the correction result indicates that the answer information contains errors, the second output unit 415 may highlight the erroneous answer information on the learning device for the user to view.
By implementing the above wearable device, the wearable device can further help its wearer learn independently and cultivate the wearer's ability and enthusiasm to solve problems independently.
It can be seen that, by implementing the wearable device described in fig. 6, the accuracy of content identification by the wearable device can be improved.
In addition, the wearable device described in fig. 6 can be implemented to realize a question searching function by using the wearable device, thereby helping a wearer of the wearable device to perform autonomous learning and improving learning enthusiasm.
Referring to fig. 7, fig. 7 is a schematic structural diagram of another wearable device disclosed in an embodiment of the present application. The wearable device comprises a smart host and a host bracket, and the smart host can rotate to any angle within a 360° range when erected perpendicular to the host bracket. As shown in fig. 7, the smart host may include:
a memory 701 in which executable program code is stored;
a processor 702 coupled to the memory 701;
the processor 702 calls the executable program code stored in the memory 701 to execute all or part of the steps in any one of the content identification methods based on the wearable device in fig. 1 to 3.
In addition, the embodiment of the application further discloses a computer readable storage medium storing a computer program for electronic data exchange, wherein the computer program enables a computer to execute all or part of the steps in any one of the content identification methods based on the wearable device in fig. 1 to 3.
In addition, the embodiment of the present application further discloses a computer program product, which when running on a computer, causes all or part of the steps in any one of the methods for content identification based on a wearable device in fig. 1 to fig. 3 to be performed.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing the associated hardware, and the program may be stored in a computer-readable storage medium, including Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-Time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical storage, magnetic disk storage, magnetic tape storage, or any other computer-readable medium that can be used to carry or store data.
The content identification method based on a wearable device and the wearable device disclosed in the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementation of the present invention, and the description of the above embodiments is only intended to help in understanding the method and its core idea. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the scope of application according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (12)

1. A content recognition method based on a wearable device, wherein the wearable device comprises a smart host and a host bracket, the smart host can rotate to any angle within a 360° range when erected perpendicular to the host bracket, and the method comprises the following steps:
when the smart host is erected perpendicular to the host bracket, detecting whether a touch area of a user is located at the edge of a shooting angle of view of any shooting module of the smart host;
if so, controlling the smart host, while it is erected perpendicular to the host bracket, to rotate until a shooting center point of one of its shooting modules is closest to the touch area;
controlling the shooting module of the smart host whose shooting center point is closest to the touch area to shoot the touch area, obtaining a captured image; and
performing content recognition on the captured image.
2. The content recognition method according to claim 1, wherein after the content recognition of the captured image, the method further comprises:
outputting the identification content of the captured image to a screen of the smart host for display; and
controlling the smart host to rotate so that the screen faces a face of a wearer of the wearable device, so that the wearer can conveniently view the identification content of the captured image.
3. The content recognition method according to claim 2, wherein after controlling the smart host to rotate so that the screen faces the face of the wearer of the wearable device so as to facilitate the wearer's viewing of the identification content of the captured image, the method further comprises:
acquiring an operation instruction sent by the wearer;
if the operation instruction indicates click-to-read on the identification content, extracting text content in the identification content;
acquiring click-to-read content selected by the wearer from the text content; and
broadcasting the click-to-read content.
4. The content recognition method of claim 3, further comprising:
if the operation instruction indicates question searching on the identification content, extracting image-text content in the identification content;
searching a networked question bank for question-and-answer information matching the image-text content; and
outputting the question-and-answer information to a learning device connected to the wearable device for display.
5. The method of claim 4, wherein after extracting the image-text content in the identification content and before searching the networked question bank for the question-and-answer information matching the image-text content, the method further comprises:
identifying the semantics of the image-text content, and judging, according to the semantics, whether the image-text content contains complete question information; if not, adjusting the shooting angle of view of a shooting module of the smart host so that it contains the complete question information, and then executing the step of searching the networked question bank for the question-and-answer information matching the image-text content.
6. A wearable device, comprising a smart host and a host bracket, wherein the smart host can rotate to any angle within a 360° range when erected perpendicular to the host bracket, the smart host comprising:
a touch detection unit, configured to detect, when the smart host is erected perpendicular to the host bracket, whether a touch area of a user is located at the edge of a shooting angle of view of any shooting module of the smart host;
a first control unit, configured to, when the touch detection unit detects that the touch area of the user is located at the edge of the shooting angle of view of any shooting module of the smart host, control the smart host, while it is erected perpendicular to the host bracket, to rotate until a shooting center point of one of its shooting modules is closest to the touch area;
a second control unit, configured to control the shooting module of the smart host whose shooting center point is closest to the touch area to shoot the touch area, obtaining a captured image; and
a content identification unit, configured to perform content recognition on the captured image.
7. The wearable device of claim 6, wherein the smart host further comprises:
a first output unit, configured to output the identification content of the captured image to a screen of the smart host for display after the content identification unit performs content recognition on the captured image; and
a third control unit, configured to control the smart host to rotate so that the screen faces a face of a wearer of the wearable device, so that the wearer can conveniently view the identification content of the captured image.
8. The wearable device of claim 7, wherein the smart host further comprises:
a first acquiring unit, configured to acquire an operation instruction sent by the wearer after the third control unit controls the smart host to rotate so that the screen faces the wearer's face, making it convenient for the wearer to view the identification content of the captured image;
a first extracting unit, configured to extract text content in the identification content when the operation instruction sent by the wearer and acquired by the first acquiring unit indicates click-to-read on the identification content;
a second acquiring unit, configured to acquire click-to-read content selected by the wearer from the text content; and
a broadcasting unit, configured to broadcast the click-to-read content.
9. The wearable device of claim 8, wherein the smart host further comprises:
a second extracting unit, configured to extract image-text content in the identification content when the operation instruction sent by the wearer and acquired by the first acquiring unit indicates question searching on the identification content;
a networked searching unit, configured to search a networked question bank for question-and-answer information matching the image-text content; and
a second output unit, configured to output the question-and-answer information to a learning device connected to the wearable device for display.
10. The wearable device of claim 9, wherein the smart host further comprises:
a semantic recognition unit, configured to recognize the semantics of the image-text content after the second extracting unit extracts the image-text content in the identification content and before the networked searching unit searches the networked question bank for the question-and-answer information matching the image-text content, and to judge, according to the semantics, whether the image-text content contains complete question information; and
an adjusting unit, configured to, when the semantic recognition unit judges that the image-text content does not contain complete question information, adjust the shooting angle of view of a shooting module of the smart host so that it contains the complete question information, and trigger the networked searching unit to search the networked question bank for question-and-answer information matching the image-text content containing the complete question information.
11. A wearable device, comprising a smart host and a host bracket, wherein the smart host can rotate to any angle within a 360° range when erected perpendicular to the host bracket, the smart host comprising:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the wearable device-based content identification method according to any one of claims 1 to 5.
12. A computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute the wearable-device-based content identification method according to any one of claims 1 to 5.
CN201911088886.9A 2019-11-08 2019-11-08 Content identification method based on wearable device and wearable device Active CN111182202B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911088886.9A CN111182202B (en) 2019-11-08 2019-11-08 Content identification method based on wearable device and wearable device

Publications (2)

Publication Number Publication Date
CN111182202A true CN111182202A (en) 2020-05-19
CN111182202B CN111182202B (en) 2022-05-27

Family

ID=70651882

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101783883A (en) * 2009-12-26 2010-07-21 华为终端有限公司 Adjusting method in co-optical-center videography and co-optical-center camera system
CN103581532A (en) * 2012-07-24 2014-02-12 合硕科技股份有限公司 Method and device for controlling lens signal photographing with handheld device
CN104217197A (en) * 2014-08-27 2014-12-17 华南理工大学 Touch reading method and device based on visual gestures
CN104793887A (en) * 2015-04-29 2015-07-22 广东欧珀移动通信有限公司 Double camera control method and device of mobile terminal
CN104822029A (en) * 2015-05-22 2015-08-05 广东欧珀移动通信有限公司 Method and device for controlling rotary camera to rotate and mobile terminal
CN104954672A (en) * 2015-06-10 2015-09-30 惠州Tcl移动通信有限公司 Mobile terminal and manual focusing method thereof
CN105120162A (en) * 2015-08-27 2015-12-02 广东欧珀移动通信有限公司 Camera rotation control method and terminal
CN105611161A (en) * 2015-12-24 2016-05-25 广东欧珀移动通信有限公司 Photographing control method, photographing control device and photographing system
CN105791675A (en) * 2016-02-26 2016-07-20 广东欧珀移动通信有限公司 Terminal, imaging and interaction control method and device, and terminal and system thereof
EP3103385A1 (en) * 2015-06-12 2016-12-14 Hill-Rom Services, Inc. Image transmission or recording triggered by bed event
CN106485758A (en) * 2016-10-31 2017-03-08 成都通甲优博科技有限责任公司 Implementation method demarcated by a kind of unmanned plane camera calibration device, scaling method and streamline
CN110174924A (en) * 2018-09-30 2019-08-27 广东小天才科技有限公司 A kind of making friends method and wearable device based on wearable device
CN110177242A (en) * 2019-04-08 2019-08-27 广东小天才科技有限公司 A kind of video call method and wearable device based on wearable device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Zhijie: "Image Sharpness Comparison: Area-Array Scanning vs. Line-Array Scanning (Part 2)", Digital Printing *

Also Published As

Publication number Publication date
CN111182202B (en) 2022-05-27

Similar Documents

Publication Publication Date Title
CN110225387A (en) A kind of information search method, device and electronic equipment
KR101696555B1 (en) Text location search system in image information or geographic information using voice recognition function and method thereof
CN109410664B (en) Pronunciation correction method and electronic equipment
CN107451127B (en) Word translation method and system based on image and mobile device
CN109597943B (en) Learning content recommendation method based on scene and learning equipment
CN104796584A (en) Prompt device with voice recognition function
CN110992783A (en) Sign language translation method and translation equipment based on machine learning
JP2011002656A (en) Device for detection of voice recognition result correction candidate, voice transcribing support device, method, and program
US20080094496A1 (en) Mobile communication terminal
CN111415537A (en) Symbol-labeling-based word listening system for primary and secondary school students
CN111951629A (en) Pronunciation correction system, method, medium and computing device
CN105100647A (en) Subtitle correction method and terminal
CN111081080A (en) Voice detection method and learning device
CN111680177A (en) Data searching method, electronic device and computer-readable storage medium
CN111739534A (en) Processing method and device for assisting speech recognition, electronic equipment and storage medium
CN111156441A (en) Desk lamp, system and method for assisting learning
CN111182202B (en) Content identification method based on wearable device and wearable device
US8035744B2 (en) Television receiver and method of receiving television broadcasting
CN112163513A (en) Information selection method, system, device, electronic equipment and storage medium
CN111079726B (en) Image processing method and electronic equipment
US20200304708A1 (en) Method and apparatus for acquiring an image
CN111027353A (en) Search content extraction method and electronic equipment
CN111553356B (en) Character recognition method and device, learning device and computer readable storage medium
CN110795918A (en) Method, device and equipment for determining reading position
CN111078080B (en) Point reading control method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant