CN112817445A - Information acquisition method and device, electronic equipment and storage medium - Google Patents

Information acquisition method and device, electronic equipment and storage medium

Info

Publication number
CN112817445A
Authority
CN
China
Prior art keywords
information
acquisition
determining
target
fingertip
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110094826.9A
Other languages
Chinese (zh)
Inventor
王青
王宇翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
DMAI Guangzhou Co Ltd
Original Assignee
DMAI Guangzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by DMAI Guangzhou Co Ltd filed Critical DMAI Guangzhou Co Ltd
Priority to CN202110094826.9A priority Critical patent/CN112817445A/en
Publication of CN112817445A publication Critical patent/CN112817445A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20: Movements or behaviour, e.g. gesture recognition
    • G06V 40/28: Recognition of hand or arm movements, e.g. recognition of deaf sign language

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides an information acquisition method, an information acquisition device, electronic equipment and a storage medium, wherein the method comprises the following steps: acquiring a monitoring image of a to-be-detected area; extracting gesture contour information from the monitoring image, the gesture contour information comprising pointing information and fingertip coordinates, or pointing information and a fingertip movement track; determining a target acquisition area according to the gesture contour information; and acquiring the image content in the target acquisition area and obtaining corresponding target information according to the image content. With the method provided by this solution, the target acquisition area can be determined from the gesture contour information extracted from the monitoring image, so that the target information is obtained; this reduces the workload of operators and lays a foundation for improving the application efficiency of AR technology.

Description

Information acquisition method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of AR technologies, and in particular, to an information acquisition method and apparatus, an electronic device, and a storage medium.
Background
With the increasing popularization of AR technology, it has been applied to fields such as distance education. For example, AR technology allows a teacher to teach students at a remote end using teaching aids, such as books, at hand.
In the prior art, AR technology generally requires the user to hold or wear a hardware device, collect the target content with that device, and then generate a corresponding AR image from the collected content.
However, collecting target content in this way places certain demands on the operator's technical skill, and the operation process is cumbersome, which is not conducive to the application efficiency of AR technology. An information acquisition method that simplifies the collection of target content for operators such as teachers is therefore urgently needed and is of great significance for improving the application efficiency of AR technology.
Disclosure of Invention
The application provides an information acquisition method, an information acquisition device, electronic equipment and a storage medium, aiming to overcome defects of the prior art such as the cumbersome operation process.
A first aspect of the present application provides an information acquisition method, including:
acquiring a monitoring image of a to-be-detected area;
extracting gesture contour information from the monitoring image; the gesture contour information comprises pointing information and fingertip coordinates, or pointing information and a fingertip movement track;
determining a target acquisition area according to the gesture contour information;
and acquiring the image content in the target acquisition area, and acquiring corresponding target information according to the image content.
Optionally, the determining a target acquisition area according to the gesture contour information includes:
determining a first acquisition area according to the pointing information and the fingertip coordinates; wherein the first acquisition region comprises a plurality of acquisition sub-regions;
and determining the target acquisition area according to the distance between each acquisition sub-region in the first acquisition area and the fingertip coordinate.
Optionally, the determining a target acquisition area according to the gesture contour information includes:
judging whether the fingertip movement track is a closed track or not according to the starting point coordinate, the end point coordinate and the path length of the fingertip movement track;
and when the fingertip movement track is determined to be a closed track, determining a target acquisition area according to the closed range of the fingertip movement track.
Optionally, the method further includes:
when the fingertip movement track is determined to be an unclosed track, determining a second acquisition area according to the pointing information and the fingertip movement track; wherein the second acquisition region comprises a plurality of acquisition sub-regions;
and determining the target acquisition area according to the distance between each acquisition sub-area in the second acquisition area and the fingertip movement track.
Optionally, the determining whether the fingertip movement track is a closed track according to the start point coordinate, the end point coordinate and the path length of the fingertip movement track includes:
determining the ratio of the distance between the starting point coordinate and the end point coordinate to the path length of the fingertip movement track according to the starting point coordinate, the end point coordinate and the path length of the fingertip movement track;
judging whether the ratio is smaller than a preset ratio threshold value or not;
and when the ratio is smaller than a preset ratio threshold value, determining that the fingertip movement track is a closed track.
Optionally, the method further includes:
and generating a corresponding AR image according to the target information.
Optionally, the acquiring the image content in the target acquisition area includes:
extracting image features in the target acquisition region;
and determining the image content according to the image characteristics.
A second aspect of the present application provides an information acquisition apparatus, comprising:
the acquiring module is used for acquiring a monitoring image of a to-be-detected area;
the extraction module is used for extracting gesture contour information from the monitoring image; the gesture contour information comprises pointing information and fingertip coordinates, or pointing information and a fingertip movement track;
the determining module is used for determining a target acquisition area according to the gesture contour information;
and the acquisition module is used for acquiring the image content in the target acquisition area and acquiring corresponding target information according to the image content.
Optionally, the determining module is specifically configured to:
determining a first acquisition area according to the pointing information and the fingertip coordinates; wherein the first acquisition region comprises a plurality of acquisition sub-regions;
and determining the target acquisition area according to the distance between each acquisition sub-region in the first acquisition area and the fingertip coordinate.
Optionally, the determining module is specifically configured to:
judging whether the fingertip movement track is a closed track or not according to the starting point coordinate, the end point coordinate and the path length of the fingertip movement track;
and when the fingertip movement track is determined to be a closed track, determining a target acquisition area according to the closed range of the fingertip movement track.
Optionally, the determining module is specifically configured to:
when the fingertip movement track is determined to be an unclosed track, determining a second acquisition area according to the pointing information and the fingertip movement track; wherein the second acquisition region comprises a plurality of acquisition sub-regions;
and determining the target acquisition area according to the distance between each acquisition sub-area in the second acquisition area and the fingertip movement track.
Optionally, the determining module is specifically configured to:
determining the ratio of the distance between the starting point coordinate and the end point coordinate to the path length of the fingertip movement track according to the starting point coordinate, the end point coordinate and the path length of the fingertip movement track;
judging whether the ratio is smaller than a preset ratio threshold value or not;
and when the ratio is smaller than a preset ratio threshold value, determining that the fingertip movement track is a closed track.
Optionally, the acquisition module is further configured to:
and generating a corresponding AR image according to the target information.
Optionally, the acquisition module is specifically configured to:
extracting image features in the target acquisition region;
and determining the image content according to the image characteristics.
A third aspect of the present application provides an electronic device, comprising: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executes computer-executable instructions stored by the memory to cause the at least one processor to perform the method as set forth in the first aspect above and in various possible designs of the first aspect.
A fourth aspect of the present application provides a computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, implement a method as set forth in the first aspect and various possible designs of the first aspect.
The technical solution of the present application has the following advantages:
according to the information acquisition method, the information acquisition device, the electronic equipment and the storage medium, the monitoring image of the area to be detected is obtained; extracting gesture outline information from the monitoring image; the gesture contour information comprises pointing information and fingertip coordinates, or pointing information and a fingertip movement track; determining a target acquisition area according to the gesture outline information; and acquiring the image content in the target acquisition area, and acquiring corresponding target information according to the image content. According to the method provided by the scheme, the target acquisition area can be determined according to the gesture outline information obtained from the monitoring image, so that the target information is obtained, the workload of operators is reduced, and a foundation is laid for improving the application efficiency of the AR technology.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings can be obtained by those skilled in the art according to these drawings.
Fig. 1 is a schematic structural diagram of an information acquisition system according to an embodiment of the present application;
fig. 2 is a schematic flow chart of an information acquisition method according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of an information acquisition device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the concepts of the disclosure to those skilled in the art by reference to specific embodiments.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Furthermore, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. In the description of the following examples, "plurality" means two or more unless specifically limited otherwise.
In the prior art, AR technology generally requires the user to hold or wear a hardware device, collect the target content with that device, and then generate a corresponding AR image from the collected content. However, collecting target content in this way places certain demands on the operator's technical skill, and the operation process is cumbersome, which is not conducive to the application efficiency of AR technology.
In order to solve the above problems, the information acquisition method, information acquisition device, electronic device and storage medium provided by the embodiments of the present application acquire a monitoring image of the area to be detected; extract gesture contour information from the monitoring image, the gesture contour information comprising pointing information and fingertip coordinates, or pointing information and a fingertip movement track; determine a target acquisition area according to the gesture contour information; and acquire the image content in the target acquisition area and obtain corresponding target information according to the image content. With the method provided by this solution, the target acquisition area can be determined from the gesture contour information extracted from the monitoring image, so that the target information is obtained; this reduces the workload of operators and lays a foundation for improving the application efficiency of AR technology.
The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present invention will be described below with reference to the accompanying drawings.
First, a structure of an information acquisition system based on the present application will be explained:
the information acquisition method, the information acquisition device, the electronic equipment and the storage medium are suitable for acquiring target information so as to generate corresponding AR images. As shown in fig. 1, the schematic structural diagram of an information acquisition system according to an embodiment of the present application mainly includes a monitoring device and an information acquisition device for acquiring information. Specifically, a monitoring device is used for monitoring an area to be subjected to information acquisition, an obtained monitoring image is sent to an information acquisition device, and the information acquisition device acquires target information according to the monitoring image.
The embodiment of the application provides an information acquisition method, which is used for acquiring target information so as to generate a corresponding AR image. The execution subject of the embodiment of the present application is an electronic device, such as a server, a desktop computer, a notebook computer, a tablet computer, and other electronic devices that can be used for information collection.
As shown in fig. 2, a schematic flow chart of an information acquisition method provided in the embodiment of the present application is shown, where the method includes:
step 201, acquiring a monitoring image of a region to be detected.
It should be explained that the information acquisition method provided by the embodiment of the present application may specifically collect image information from flat objects such as drawings and books on a desktop.
For example, when the information acquisition method provided by the embodiment of the present application is applied to the field of remote education, a preset monitoring device, such as a camera, may be used to monitor the teacher's desktop area (the area to be detected), so as to obtain a monitoring image of that area.
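As a purely illustrative, non-limiting sketch of step 201 (the camera index and the helper name are assumptions of this illustration, not part of the disclosure), the monitoring image could be captured as follows:

```python
# Illustrative sketch of step 201: grab one monitoring frame of the desktop
# (to-be-detected) area. The camera index and function name are assumptions.
import cv2


def capture_monitoring_image(camera_index: int = 0):
    cap = cv2.VideoCapture(camera_index)   # preset monitoring device (camera)
    if not cap.isOpened():
        raise RuntimeError("monitoring camera could not be opened")
    ok, frame = cap.read()                 # one BGR monitoring image
    cap.release()
    if not ok:
        raise RuntimeError("failed to read a monitoring frame")
    return frame
```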
Step 202, extracting gesture contour information from the monitoring image.
The gesture contour information comprises pointing information and fingertip coordinates, or pointing information and a fingertip movement track.
It should be explained that when an operator such as a teacher explains content on the desktop, the finger points to the area being explained. Therefore, in order to improve information acquisition efficiency, the gesture contour information in the monitoring image can be extracted; the specific extraction method may follow contour feature extraction techniques in the prior art and is not limited in the embodiment of the present application.
The pointing information may be determined according to a preset information acquisition rule; for example, the direction of the index fingertip is taken as the pointing direction, and the corresponding fingertip coordinate is the coordinate of the index fingertip. Because the information acquisition method provided by the embodiment of the present application is mainly used for acquiring information on flat surfaces such as a desktop, a two-dimensional coordinate system can be established according to the application scene in order to reduce the computational load of the executing device, and the coordinates used in the embodiment of the present application are likewise two-dimensional coordinates.
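Purely by way of illustration, one possible way to estimate fingertip coordinates and pointing information from a binary hand mask is sketched below; the skin-mask input, the "farthest convex-hull vertex from the palm centroid" heuristic and the OpenCV calls are assumptions of this sketch and are not prescribed by the disclosure:

```python
# Illustrative sketch: estimate the fingertip coordinate and pointing direction
# from a binary hand mask on the 2-D desktop plane. The "farthest convex-hull
# vertex from the palm centroid" heuristic is an assumption of this sketch.
import cv2
import numpy as np


def fingertip_and_pointing(hand_mask: np.ndarray):
    contours, _ = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)            # gesture contour
    m = cv2.moments(hand)
    if m["m00"] == 0:
        return None
    centroid = np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])
    hull = cv2.convexHull(hand).reshape(-1, 2).astype(float)
    fingertip = hull[np.argmax(np.linalg.norm(hull - centroid, axis=1))]
    direction = fingertip - centroid                      # pointing information
    direction /= (np.linalg.norm(direction) + 1e-9)       # unit vector
    return fingertip, direction
```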
Step 203, determining a target acquisition area according to the gesture contour information.
Specifically, before information acquisition, the image on the desktop may be subjected to region division according to a preset region division rule.
Further, according to the currently obtained gesture contour information, the corresponding target acquisition area is determined among the plurality of areas into which the current desktop has been divided.
Step 204, acquiring image content in the target acquisition area, and obtaining corresponding target information according to the image content.
It should be explained that the image content specifically refers to the content type, content semantics, content color, and the like corresponding to the target acquisition area. The target information is specifically the information used for generating the AR image, and the corresponding target information may be obtained by performing data processing such as recognition, integration and noise reduction on the image content.
Specifically, in one embodiment, in order to obtain the image content, image features in the target acquisition area may be extracted, and the image content may be determined according to the image features.
The specific image feature extraction method is not limited in the embodiment of the present application.
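As a non-limiting illustration of step 204, the sketch below crops the target acquisition area and recognizes its textual content; using pytesseract as the recognizer and the (x, y, w, h) box format are assumptions of this illustration, and any feature-extraction or recognition model could stand in for it:

```python
# Illustrative sketch of step 204: crop the target acquisition area and recognize
# its textual content. Using pytesseract as the recognizer and the (x, y, w, h)
# box format are assumptions of this sketch.
import cv2
import pytesseract


def acquire_image_content(monitoring_image, target_box):
    x, y, w, h = target_box                           # target acquisition area
    region = monitoring_image[y:y + h, x:x + w]
    gray = cv2.cvtColor(region, cv2.COLOR_BGR2GRAY)   # simple image feature
    return pytesseract.image_to_string(gray).strip()  # recognized image content
```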
Further, in an embodiment, after the target information is obtained, a corresponding AR image may be generated according to the target information.
Illustratively, a Chinese phrase contained in the image content is identified. If the image content is determined to have the two attributes of Chinese and phrase, target information containing the pinyin of the phrase, its paraphrase, example sentences using it, and the stroke order of each character is generated; an AR image is then generated and displayed on the display interface, thereby achieving information display and human-computer interaction.
On the basis of the foregoing embodiments, in order to improve the information acquisition efficiency, as an implementable manner, in an embodiment, the determining the target acquisition area according to the gesture contour information includes:
step 2031, determining a first acquisition area according to the pointing information and the fingertip coordinates; wherein the first acquisition region comprises a plurality of acquisition sub-regions;
step 2032, determining the target acquisition area according to the distance between each acquisition sub-region in the first acquisition area and the fingertip coordinates.
For example, if the pointing information is upward, an area above the fingertip coordinate is determined as the first acquisition area.
Specifically, in order to calculate the distance between a region and a point, a measurement point may be set on each acquisition sub-region according to a preset calculation rule; the measurement point may be the center of gravity of the region or the midpoint of its lower boundary. Further, the distance between the measurement point of each acquisition sub-region and the fingertip coordinate is calculated, and the result is recorded as the distance between that acquisition sub-region and the fingertip coordinate. Finally, the acquisition sub-region with the shortest distance is determined as the target acquisition area.
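A minimal sketch of steps 2031 to 2032, assuming the acquisition sub-regions are axis-aligned boxes given as (x, y, w, h) and using the lower-boundary midpoint as the measurement point (both assumptions of this illustration), is shown below:

```python
# Illustrative sketch of steps 2031-2032: pick the acquisition sub-region whose
# measurement point (here, the midpoint of the lower boundary) lies closest to
# the fingertip coordinate. Boxes given as (x, y, w, h) are an assumption.
import math


def nearest_sub_region(sub_regions, fingertip):
    fx, fy = fingertip

    def measurement_point(box):
        x, y, w, h = box
        return x + w / 2.0, y + h          # midpoint of the lower boundary

    def distance(box):
        mx, my = measurement_point(box)
        return math.hypot(mx - fx, my - fy)

    return min(sub_regions, key=distance)  # target acquisition area
```

The sub-region returned by this sketch is taken directly as the target acquisition area, mirroring the shortest-distance rule described above.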
Similarly, in an embodiment, whether the fingertip movement track is a closed track or not can be judged according to the starting point coordinate, the end point coordinate and the path length of the fingertip movement track; and when the fingertip movement track is determined to be a closed track, determining a target acquisition area according to the closed range of the fingertip movement track.
For example, if the current operator circles a region on the desktop with the index finger, the region is determined as the target acquisition region.
Specifically, in an embodiment, since the operator circles the desktop content freehand while explaining it, it cannot be guaranteed that the start point coordinate of the fingertip movement track coincides exactly with the end point coordinate of the whole track. Therefore, to tolerate this error and improve the reliability of the judgment result, the ratio of the distance between the start point coordinate and the end point coordinate to the path length of the fingertip movement track can be determined according to the start point coordinate, the end point coordinate and the path length; whether the ratio is smaller than a preset ratio threshold is then judged; and when the ratio is smaller than the preset ratio threshold, the fingertip movement track is determined to be a closed track.
The ratio threshold may be set according to the actual situation and is not limited in the embodiment of the present application.
Specifically, when the distance between the start point coordinate and the end point coordinate is much smaller than the path length of the entire fingertip movement track, that is, the start and end points are close together while the track itself is relatively long, it can be determined that the fingertip movement track is a closed track.
Conversely, when the ratio of the distance between the start point coordinate and the end point coordinate to the path length of the fingertip movement track is not less than the preset ratio threshold, determining that the fingertip movement track is an unclosed track.
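A minimal sketch of this closed-track test, with an assumed example ratio threshold of 0.1 and the track given as a list of (x, y) fingertip samples, could look like this:

```python
# Illustrative sketch of the closed-track test: the track is treated as closed
# when dist(start, end) / path_length falls below a preset ratio threshold.
# The 0.1 threshold is an assumed example value; the track is a list of (x, y).
import math


def is_closed_track(track, ratio_threshold: float = 0.1) -> bool:
    if len(track) < 3:
        return False
    path_length = sum(math.dist(track[i], track[i + 1])
                      for i in range(len(track) - 1))
    if path_length == 0:
        return False
    return math.dist(track[0], track[-1]) / path_length < ratio_threshold
```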
Further, in an embodiment, when the fingertip movement track is determined to be the non-closed track, the second acquisition area is determined according to the pointing information and the fingertip movement track; wherein the second acquisition region comprises a plurality of acquisition sub-regions; and determining a target acquisition area according to the distance between each acquisition sub-area in the second acquisition area and the fingertip movement track.
For example, if the current fingertip movement track is a smooth line segment and the pointing information is upward, the region above the line segment is determined as the second acquisition region. Concretely, a ray pointing in the same direction as the fingertip can be cast with the fingertip as its origin; the area swept by this ray while the gesture motion (the fingertip movement track) is completed is the second acquisition region.
Specifically, the perpendicular distance between each acquisition sub-region and the fingertip movement track may be taken as the distance between that acquisition sub-region and the fingertip movement track, and the acquisition sub-region with the shortest distance may then be determined as the target acquisition area.
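For the non-closed case, a minimal sketch under the same box and measurement-point assumptions as above, taking the distance between a sub-region and the track as the shortest distance from its measurement point to any segment of the track, is given below:

```python
# Illustrative sketch for the non-closed case: the distance between a sub-region
# and the fingertip movement track is taken as the shortest distance from the
# sub-region's measurement point to any segment of the track (same box and
# measurement-point assumptions as in the sketch above).
import math


def point_segment_distance(p, a, b):
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))


def nearest_sub_region_to_track(sub_regions, track):
    def measurement_point(box):
        x, y, w, h = box
        return x + w / 2.0, y + h

    def distance(box):
        mp = measurement_point(box)
        return min(point_segment_distance(mp, track[i], track[i + 1])
                   for i in range(len(track) - 1))

    return min(sub_regions, key=distance)   # target acquisition area
```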
It should be explained that the target acquisition area provided by the embodiment of the present application may be a word, a paragraph of text, a complete mathematical formula, or a picture. During region division, the division rule can be adjusted according to the gesture information. For example, if the target acquisition area is determined according to the fingertip coordinates, the division unit of the regions may be smaller, such as a single word; if the target acquisition area is determined according to the fingertip movement track, the division unit may be larger, such as a line of characters, so as to match the operator's operating habits.
It should be further explained that, to better match the operator's operating habits, if the fingertip stays at a certain point for more than 1 second, the position coordinates of that point are determined as the fingertip coordinates. If the fingertip starts to move after staying at a position for more than 500 milliseconds, that position is taken as the start point coordinate of a stroke; when the fingertip then stops moving for more than 1 second, the stroke ends and that position is taken as the end point coordinate. When the recognition accuracy of the executing device is insufficient and the fingertip movement track cannot be recognized, the line connecting the start point coordinate and the end point coordinate may be taken as the fingertip movement track.
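A rough sketch of these dwell-time rules is given below; the 10-pixel dwell radius, the (timestamp in seconds, x, y) sample format and the interpretation helper are assumptions of this illustration only:

```python
# Rough sketch of the dwell-time rules: a >1 s dwell alone yields a fingertip
# coordinate; a >0.5 s dwell followed by movement starts a stroke, and the next
# >1 s dwell ends it. The 10-pixel dwell radius and the (timestamp_s, x, y)
# sample format are assumptions of this sketch.
import math

DWELL_RADIUS = 10.0


def dwell_intervals(samples, min_duration):
    """Return (start_index, end_index) pairs where the fingertip stays put."""
    intervals, i = [], 0
    while i < len(samples):
        j = i
        while (j + 1 < len(samples)
               and math.dist(samples[j + 1][1:], samples[i][1:]) <= DWELL_RADIUS):
            j += 1
        if samples[j][0] - samples[i][0] >= min_duration:
            intervals.append((i, j))
        i = j + 1
    return intervals


def interpret_gesture(samples):
    long_dwells = dwell_intervals(samples, 1.0)     # point selection / stroke end
    start_dwells = dwell_intervals(samples, 0.5)    # possible stroke start
    if not start_dwells:
        return None
    s_i, s_j = start_dwells[0]
    for d_i, _ in long_dwells:
        if d_i > s_j:                               # movement, then a long dwell
            track = [(x, y) for _, x, y in samples[s_j:d_i + 1]]
            return "track", track                   # fingertip movement track
    if long_dwells and long_dwells[0][0] == s_i:
        return "point", samples[s_i][1:]            # fingertip coordinate
    return None
```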
According to the information acquisition method provided by the embodiment of the present application, a monitoring image of the area to be detected is obtained; gesture contour information, comprising pointing information and fingertip coordinates, or pointing information and a fingertip movement track, is extracted from the monitoring image; a target acquisition area is determined according to the gesture contour information; and the image content in the target acquisition area is acquired and corresponding target information is obtained according to it. With the method provided by this solution, the target acquisition area can be determined from the gesture contour information extracted from the monitoring image, so that the target information is obtained; this reduces the workload of operators and lays a foundation for improving the application efficiency of AR technology. In addition, even when the acquisition area has an irregular shape and contains much similar interfering information, the target acquisition area can be located accurately and the target information obtained, further improving information acquisition efficiency. Moreover, in an indoor desktop environment a person's attention usually follows the hands, and hand movement is convenient and natural, which improves the user experience.
The embodiment of the application provides an information acquisition device, which is used for executing the information acquisition method provided by the embodiment.
As shown in fig. 3, a schematic structural diagram of an information acquisition device provided in the embodiment of the present application is shown. The information acquisition apparatus 30 includes an acquiring module 301, an extraction module 302, a determining module 303, and an acquisition module 304.
The acquiring module 301 is configured to acquire a monitoring image of the area to be detected; the extraction module 302 is configured to extract gesture contour information from the monitoring image, the gesture contour information comprising pointing information and fingertip coordinates, or pointing information and a fingertip movement track; the determining module 303 is configured to determine a target acquisition area according to the gesture contour information; and the acquisition module 304 is configured to acquire the image content in the target acquisition area and obtain corresponding target information according to the image content.
The specific manner in which each module performs the operation has been described in detail in the embodiment of the method, and will not be described in detail here.
The information acquisition device provided by the embodiment of the application is used for executing the information acquisition method provided by the embodiment, the implementation mode and the principle are the same, and the description is omitted.
The embodiment of the application provides electronic equipment which is used for executing the information acquisition method provided by the embodiment.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 40 includes: at least one processor 41 and memory 42;
the memory stores computer-executable instructions; the at least one processor executes the computer-executable instructions stored by the memory, so that the at least one processor performs the information acquisition method provided by the above embodiments.
The electronic device provided by the embodiment of the application is used for executing the information acquisition method provided by the embodiment, and the implementation manner and the principle of the electronic device are the same and are not repeated.
An embodiment of the present application provides a computer-readable storage medium in which computer-executable instructions are stored; when the computer-executable instructions are executed by a processor, the information acquisition method provided in any of the above embodiments is implemented.
The storage medium of the embodiments of the present application may be used to store computer-executable instructions for performing the information acquisition method provided in the foregoing embodiments; the implementation manner and principle thereof are the same and are not described again.
It is obvious to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to perform all or part of the above described functions. For the specific working process of the device described above, reference may be made to the corresponding process in the foregoing method embodiment, which is not described herein again.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. An information acquisition method, comprising:
acquiring a monitoring image of a to-be-detected area;
extracting gesture contour information from the monitoring image; the gesture contour information comprises pointing information and fingertip coordinates, or pointing information and a fingertip movement track;
determining a target acquisition area according to the gesture contour information;
and acquiring the image content in the target acquisition area, and acquiring corresponding target information according to the image content.
2. The method of claim 1, wherein the determining a target acquisition area according to the gesture contour information comprises:
determining a first acquisition area according to the pointing information and the fingertip coordinates; wherein the first acquisition region comprises a plurality of acquisition sub-regions;
and determining the target acquisition area according to the distance between each acquisition sub-region in the first acquisition area and the fingertip coordinate.
3. The method of claim 1, wherein the determining a target acquisition area according to the gesture contour information comprises:
judging whether the fingertip movement track is a closed track or not according to the starting point coordinate, the end point coordinate and the path length of the fingertip movement track;
and when the fingertip movement track is determined to be a closed track, determining a target acquisition area according to the closed range of the fingertip movement track.
4. The method of claim 3, further comprising:
when the fingertip movement track is determined to be an unclosed track, determining a second acquisition area according to the pointing information and the fingertip movement track; wherein the second acquisition region comprises a plurality of acquisition sub-regions;
and determining the target acquisition area according to the distance between each acquisition sub-area in the second acquisition area and the fingertip movement track.
5. The method according to claim 3, wherein said determining whether the fingertip movement track is a closed track according to the start point coordinate, the end point coordinate and the path length of the fingertip movement track comprises:
determining the ratio of the distance between the starting point coordinate and the end point coordinate to the path length of the fingertip movement track according to the starting point coordinate, the end point coordinate and the path length of the fingertip movement track;
judging whether the ratio is smaller than a preset ratio threshold value or not;
and when the ratio is smaller than a preset ratio threshold value, determining that the fingertip movement track is a closed track.
6. The method of claim 1, further comprising:
and generating a corresponding AR image according to the target information.
7. The method of claim 1, wherein the acquiring the image content in the target acquisition area comprises:
extracting image features in the target acquisition region;
and determining the image content according to the image characteristics.
8. An information acquisition apparatus, comprising:
the acquiring module is used for acquiring a monitoring image of a to-be-detected area;
the extraction module is used for extracting gesture contour information from the monitoring image; the gesture contour information comprises pointing information and fingertip coordinates, or pointing information and a fingertip movement track;
the determining module is used for determining a target acquisition area according to the gesture contour information;
and the acquisition module is used for acquiring the image content in the target acquisition area and acquiring corresponding target information according to the image content.
9. An electronic device, comprising: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executing the computer-executable instructions stored by the memory causes the at least one processor to perform the method of any of claims 1-7.
10. A computer-readable storage medium having computer-executable instructions stored thereon which, when executed by a processor, implement the method of any one of claims 1 to 7.
CN202110094826.9A 2021-01-25 2021-01-25 Information acquisition method and device, electronic equipment and storage medium Pending CN112817445A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110094826.9A CN112817445A (en) 2021-01-25 2021-01-25 Information acquisition method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110094826.9A CN112817445A (en) 2021-01-25 2021-01-25 Information acquisition method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112817445A true CN112817445A (en) 2021-05-18

Family

ID=75859454

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110094826.9A Pending CN112817445A (en) 2021-01-25 2021-01-25 Information acquisition method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112817445A (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080231608A1 (en) * 2007-03-23 2008-09-25 Denso Corporation Operating input device for reducing input error
US20110057953A1 (en) * 2009-09-07 2011-03-10 Horodezky Samuel J User interface methods for ending an application
US20130002720A1 (en) * 2011-06-28 2013-01-03 Chi Mei Communication Systems, Inc. System and method for magnifying a webpage in an electronic device
CN103150091A (en) * 2013-03-04 2013-06-12 苏州佳世达电通有限公司 Input method of electronic device
CN103576848A (en) * 2012-08-09 2014-02-12 腾讯科技(深圳)有限公司 Gesture operation method and gesture operation device
US20140066017A1 (en) * 2012-09-03 2014-03-06 Samsung Electronics Co., Ltd. Method of unlocking mobile terminal, and the mobile terminal
CN103941866A (en) * 2014-04-08 2014-07-23 河海大学常州校区 Three-dimensional gesture recognizing method based on Kinect depth image
US20150062000A1 (en) * 2013-08-29 2015-03-05 Seiko Epson Corporation Head mounted display apparatus
US20160091976A1 (en) * 2014-09-30 2016-03-31 Xerox Corporation Dynamic hand-gesture-based region of interest localization
US20170153713A1 (en) * 2015-11-30 2017-06-01 Fujitsu Limited Head mounted display device and control method
CN111078083A (en) * 2019-06-09 2020-04-28 广东小天才科技有限公司 Method for determining click-to-read content and electronic equipment
CN111353501A (en) * 2020-02-25 2020-06-30 暗物智能科技(广州)有限公司 Book point-reading method and system based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination