CN111432131B - Photographing frame selection method and device, electronic equipment and storage medium


Info

Publication number
CN111432131B
Authority
CN
China
Prior art keywords
carrier
coordinates
position coordinates
frame selection
operating body
Prior art date
Legal status
Active
Application number
CN202010368339.2A
Other languages
Chinese (zh)
Other versions
CN111432131A (en)
Inventor
乔慧丽
Current Assignee
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd filed Critical Guangdong Genius Technology Co Ltd
Priority to CN202010368339.2A priority Critical patent/CN111432131B/en
Publication of CN111432131A publication Critical patent/CN111432131A/en
Application granted granted Critical
Publication of CN111432131B publication Critical patent/CN111432131B/en

Classifications

    • H: ELECTRICITY
        • H04: ELECTRIC COMMUNICATION TECHNIQUE
            • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
                    • H04N23/60: Control of cameras or camera modules
                        • H04N23/61: Control of cameras or camera modules based on recognised objects
                            • H04N23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
                        • H04N23/62: Control of parameters via user interfaces
                        • H04N23/63: Control of cameras or camera modules by using electronic viewfinders
                            • H04N23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
                                • H04N23/632: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
                    • H04N23/80: Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract



Embodiments of the present invention relate to the technical field of smart devices, and disclose a method, an apparatus, an electronic device and a storage medium for photographing frame selection. The method includes: when a frame selection instruction is received, using a camera to obtain the position coordinates of the operating body on the carrier and the gaze area of the operator's eyes on the carrier; and when one and only one of the position coordinates exists within the gaze area, performing a first operation, the first operation being to photograph the content on the carrier based on the position coordinates and a preset rule. By implementing the embodiments of the present invention, frame selection of content on the carrier is achieved by combining operating-body positioning with eye-tracking technology, which avoids the failure to identify the specific frame-selected content caused by multiple operating bodies and improves both the accuracy and the efficiency of question framing.


Description

Photographing frame selection method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of smart devices, and in particular to a photographing frame selection method and apparatus, an electronic device and a storage medium.
Background
At present, smart devices such as learning machines and home tutoring machines have a question-framing (frame selection) function, and the framing modes fall roughly into two categories. In the first, a camera arranged on the back of the device photographs the content to be selected on a carrier (such as a book); the second relies mainly on the front camera of the device, which identifies the position of an operating body (such as a finger) on the carrier, after which the content on the carrier is photographed based on that position. Framed questions are mainly used for searching questions, or for storing knowledge points or wrongly-answered questions, to facilitate later review and study.
The first framing mode has a complicated flow and depends on the operator's photographing skill; if the photo is too blurry, no subsequent operation can be performed. The second is simple to use but has the following problem: in practice, because the carrier may be wrinkled or curled, an operator frequently and habitually presses one side of the carrier with one hand during framing while using the other hand as the operating body to locate the content to be selected, so that multiple operating bodies appear in the image and the device cannot determine which one indicates the content to be frame-selected.
Disclosure of Invention
In view of the above defects, the embodiments of the present invention disclose a photographing frame selection method and apparatus, an electronic device and a storage medium, which can improve the accuracy of question framing.
The first aspect of the embodiments of the present invention discloses a photographing frame selection method, which includes:
when a frame selection instruction is received, acquiring, by using a camera, the position coordinates of an operating body on a carrier and the gaze area of the operator's eyes on the carrier;
performing a first operation when there is one and only one of the position coordinates within the gaze area; the first operation is to photograph the content on the carrier based on the position coordinates and a preset rule.
As an optional implementation manner, in the first aspect of the embodiments of the present invention, acquiring, by using a camera, the position coordinates of the operating body on the carrier includes:
starting a first camera to acquire a preview image of the carrier with the operating body;
determining the image coordinates of the operating body in the preview image;
and determining the position coordinates of the operating body by using the image coordinates.
As an optional implementation manner, in the first aspect of the embodiments of the present invention, determining the image coordinates of the operating body in the preview image includes:
identifying the operating body by using the color difference between the operating body and the content on the carrier;
and determining the image coordinates of the operating body in the preview image.
As an optional implementation manner, in the first aspect of the embodiments of the present invention, acquiring, by using a camera, the gaze area of the operator's eyes on the carrier includes:
starting a second camera to acquire a face image of the operator;
determining the position of the pupil center in the face image and the offset of the position of the pupil center relative to a reference point;
determining the gaze direction of the eye and the gaze point on the carrier based on the offset;
and determining the gaze area with the gaze point as its center.
As an optional implementation manner, in the first aspect of the embodiments of the present invention, determining the position of the pupil center in the face image and the offset of the position of the pupil center relative to the reference point includes:
inputting the face image into a previously trained convolutional neural network to determine the feature points of the pupil;
determining the position of the pupil center by using the feature points of the pupil;
and determining the offset of the pupil center according to the position of the pupil center and the reference point.
As an optional implementation manner, in the first aspect of the embodiments of the present invention, determining the offset of the pupil center position according to the position of the pupil center and the reference point includes:
constructing the eye appearance according to the feature points of the pupil, taking the position of the pupil center when looking straight ahead as the reference point;
and calculating the offset of the pupil center position relative to the reference point according to the position of the pupil center and the position of the reference point.
As an optional implementation manner, in the first aspect of the embodiments of the present invention, the method further includes:
performing a second operation when the gaze area is not on the carrier, or the position coordinates are not detected, or no position coordinate or two or more position coordinates exist within the gaze area.
The second aspect of the embodiments of the present invention discloses a photographing frame selection method, which includes the following steps:
when a frame selection instruction is received, acquiring, by using a camera, the position coordinates of an operating body on a carrier and the gaze area of the operator's eyes on the carrier;
when a plurality of the position coordinates exist within the gaze area, judging whether the plurality of position coordinates are continuous points;
if the position coordinates are continuous points, detecting the edges of the carrier and acquiring the included angle between the continuous points and any edge of the carrier;
and if the included angle between the continuous points and one edge of the carrier is smaller than or equal to a preset included angle, performing a first operation; the first operation is to photograph the content on the carrier based on the continuous points and a preset rule.
As an optional implementation manner, in the second aspect of the embodiments of the present invention, the method further includes:
performing a second operation when the gaze area is not on the carrier; or the position coordinates are not detected; or no position coordinate exists within the gaze area; or a plurality of position coordinates exist within the gaze area but are not continuous points, or not all of them are; or a plurality of position coordinates exist within the gaze area and are continuous points, but the included angles between the continuous points and all edges of the carrier are greater than a preset included angle.
A third aspect of the embodiments of the present invention discloses a photographing frame selection apparatus, including:
an acquisition unit, configured to, when a frame selection instruction is received, acquire, by using a camera, the position coordinates of the operating body on the carrier and the gaze area of the operator's eyes on the carrier;
an execution unit, configured to perform a first operation when there is one and only one position coordinate in the gaze area; the first operation is to photograph the content on the carrier based on the position coordinates and a preset rule.
A fourth aspect of the embodiments of the present invention discloses an electronic device, including: a memory storing executable program code; and a processor coupled to the memory; the processor calls the executable program code stored in the memory to perform the photographing frame selection method disclosed in the first aspect or the second aspect of the embodiments of the present invention.
A fifth aspect of the embodiments of the present invention discloses a computer-readable storage medium storing a computer program, where the computer program causes a computer to perform the photographing frame selection method disclosed in the first aspect or the second aspect of the embodiments of the present invention.
A sixth aspect of the embodiments of the present invention discloses a computer program product which, when run on a computer, causes the computer to perform the photographing frame selection method disclosed in the first aspect or the second aspect of the embodiments of the present invention.
A seventh aspect of the embodiments of the present invention discloses an application publishing platform, where the application publishing platform is configured to publish a computer program product which, when run on a computer, causes the computer to perform the photographing frame selection method disclosed in the first aspect or the second aspect of the embodiments of the present invention.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
in the embodiments of the present invention, when a frame selection instruction is received, the position coordinates of an operating body on a carrier and the gaze area of the operator's eyes on the carrier are acquired by a camera; a first operation is performed when one and only one of the position coordinates exists within the gaze area; the first operation is to photograph the content on the carrier based on the position coordinates and a preset rule. Therefore, by implementing the embodiments of the present invention, the content on the carrier is frame-selected by combining operating-body positioning with eye-tracking technology; the failure to identify the specific frame-selected content caused by multiple operating bodies is avoided, and question-framing accuracy and efficiency are improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flow chart of a photo-frame selection method according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of a process for obtaining position coordinates of an operating body on a carrier according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart illustrating a process of acquiring a gaze area of an eyeball of an operator on a carrier according to an embodiment of the disclosure;
FIG. 4 is a schematic diagram illustrating an operation of pointing a frame selection area with a finger according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating another photo frame selection method according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an operation of pointing a frame area with a pencil according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of an operation of pointing the frame selection area with two rulers according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a photo frame selection device according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of another photographing frame selection device disclosed in the embodiment of the present invention;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first", "second", "third", "fourth", and the like in the description and the claims of the present invention are used for distinguishing different objects, and are not used for describing a specific order. The terms "comprises," "comprising," and any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiments of the present invention disclose a photographing frame selection method and apparatus, an electronic device and a storage medium, which achieve question framing by combining operating-body positioning technology with eye-tracking technology, making question framing more accurate and more efficient; they are described in detail below with reference to the drawings.
Example one
Referring to fig. 1, fig. 1 is a schematic flow chart of a photographing frame selection method according to an embodiment of the present invention. The method provided by the embodiment of the present invention is suitable for smart devices with a front camera, such as a learning machine, a home tutoring machine, a point-reading machine, a tablet computer or a mobile phone. Framing refers to selecting and photographing, according to the located position of the operating body and a preset rule, the corresponding content on the carrier; the framed question is the frame-selected content. As shown in fig. 1, the photographing frame selection method includes the following steps:
110. When a frame selection instruction is received, the position coordinates of the operating body on the carrier and the gaze area of the operator's eyes on the carrier are acquired by the camera.
Framed questions are used in learning; their purpose may be to search for questions, store wrongly-answered questions, knowledge points, and the like. The frame selection instruction is initiated by the user, that is, the operator, and may be a voice instruction, a pulse instruction triggered through a touch screen or a mechanical button, a specific gesture, and so on, which is not limited here. Before the frame selection instruction is received, the camera and most components of the smart device are in a sleep state to save power; they are woken up by the frame selection instruction.
The camera is the front camera of the smart device; of course, it may also be an external camera communicating with the smart device in a wired or wireless manner. The carrier carries the content to be frame-selected and may be a book, an exercise book, a test paper, or the like. The operating body operates on the carrier, and the frame-selected content is located according to the position of the operating body; the operating body may be a finger, a stylus, a pencil, or even a small stick, which is not limited here.
Taking the front camera as an example, the carrier is placed in front of the smart device, and the smart device forms a certain included angle with the carrier so that the front camera can capture all or most of the content of the carrier. The position coordinates of the operating body on the carrier and the gaze area of the operator's eyes on the carrier can be acquired by a single camera, for example a wide-angle camera.
In a preferred embodiment of the present invention, different cameras are used to determine the operating-body position and the eye-tracking information.
Referring to fig. 2, acquiring the position coordinates of the operating body on the carrier by using the camera may include:
1101. Acquire a preview image of the carrier with the operating body through the first camera.
In the embodiment of the present invention, at least one operating body operates on the carrier and is in point contact with it, and the content finally photographed is frame-selected based on the coordinates of the contact point; the case in which the operating bodies are laid flat on the carrier is not considered in this embodiment.
The preview image does not need to be photographed and stored, and can be displayed or not displayed in a display interface of the intelligent equipment. The preview image is pre-processed by the smart device, and the pre-processing includes but is not limited to: denoising, contrast enhancement, shape correction and the like.
1102. Determine the image coordinates of the operating body in the preview image.
If one or more operating bodies have a plurality of contact points with the carrier, the image coordinates of the operating bodies contain a plurality of continuous points.
There are various ways to acquire the image coordinates of the operating body. For example, carrier image samples containing an operating body may be trained in a machine learning manner; through training, the features of the operating body in a target image can be identified, and the image coordinates of the operating body obtained from the identified features. In practice, a neural network model may need to be trained for each different type of operating body.
In a preferred embodiment of the present invention, the operation body is identified by using a color difference between the operation body and the content on the carrier.
Color-difference recognition may be implemented by color conversion. For example, the preview image is converted to grayscale, and the operating body is then distinguished by the difference between its gray value and the gray value of the content on the carrier. Generally speaking, the operating body occupies far fewer pixels in the preview image than the content on the carrier; that is, the object whose gray value covers the fewest pixels in the preview image is the operating body.
Before color conversion, the background and the foreground can be separated to extract a foreground image, preventing the background from interfering with recognition of the operating body. Illustratively, an adaptive background template can be obtained by Gaussian background modeling, and the foreground image then extracted. Of course, since the background color is generally uniform and its gray value tends toward 255, pixels whose gray value falls below a preset threshold (e.g., 200) may instead simply be extracted as the foreground image.
If the color of the operating body differs markedly from the color of the content on the carrier, the operating body can be identified by means of a binarized image, i.e., a gray-value reference is set. For example, since the content on the carrier generally tends toward black, pixels with a gray value below a certain value (e.g., 100) can be set to 0 and pixels with a gray value at or above it set to 1.
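As a rough illustration of this grayscale route, the following is a minimal sketch assuming an OpenCV environment; the threshold values and the topmost-point heuristic are illustrative assumptions, not taken from the patent:

```python
import cv2
import numpy as np

def locate_operating_body(preview_bgr, fg_thresh=200, ink_thresh=100):
    """Minimal sketch of the grayscale route described above: the page
    background tends toward white (gray near 255), the printed content
    toward black, and the operating body sits in between."""
    gray = cv2.cvtColor(preview_bgr, cv2.COLOR_BGR2GRAY)
    # Foreground extraction: drop near-white background pixels.
    foreground = gray < fg_thresh
    # Drop near-black printed content; what remains is the candidate
    # operating-body region (hand, pen, ruler).
    candidate = foreground & (gray >= ink_thresh)
    mask = candidate.astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # Take the largest candidate blob and return its topmost point as a
    # stand-in for the fingertip / pen-tip image coordinates.
    blob = max(contours, key=cv2.contourArea)
    pts = blob.reshape(-1, 2)
    tip = pts[pts[:, 1].argmin()]
    return int(tip[0]), int(tip[1])
```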
If the color of the operating body is too close to the color of the content on the carrier, the RGB preview image may be converted into an 11-dimensional multi-color space using the color-space conversion matrix provided by the multi-color-space CN (Color Names) algorithm, and the operating body then identified in the refined multi-color space.
When an operating body in the form of a stylus, pencil, small stick, or the like is used in the embodiment of the present invention, point contact is generally adopted directly; that is, only the position coordinates of the tip of the operating body are detected, the tip being the end far from the hand-held portion. The operating body, the palm, and the carrier content are detected and distinguished, and the position coordinates of the tip of the operating body far from the palm are then determined; these tip coordinates are the position coordinates of the operating body.
In the embodiment of the present invention, a finger acting as the operating body may make point contact, or the whole finger may be placed on the carrier as shown in fig. 4; in the latter case, the fingertip and the carrier content are detected and distinguished, and the determined fingertip coordinates are taken as the position coordinates of the operating body. Thus, when fingers are used, both hands may be placed on the carrier, for example one hand pressing the carrier to smooth it while the other points to the frame-selected content; the pointing hand may also touch the carrier in the process, so that several fingers appear in the preview image.
1103. Determine the position coordinates of the operating body by using the image coordinates.
The pixel coordinates are converted into world coordinates; this can be implemented in various ways.
Illustratively, this may be done by a coordinate conversion algorithm, which is shown in equation (1):
$$ s\begin{bmatrix}u\\ v\\ 1\end{bmatrix} = M\,P\begin{bmatrix}x\\ y\\ z\\ 1\end{bmatrix} \tag{1} $$
wherein s is a scale factor, (u, v) are the image coordinates, (x, y, z) are the position coordinates, and M and P are the camera's intrinsic and extrinsic matrices respectively; the intrinsic and extrinsic matrices are fixed values once the placement of the smart device is determined.
Alternatively, a mapping between the image coordinates and the position coordinates (the z-axis coordinate can be ignored) can be established directly, again provided the placement of the smart device is fixed.
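A minimal numerical sketch of inverting equation (1), assuming the intrinsic matrix M and the extrinsic rotation R and translation t have been calibrated and that the carrier lies in the world plane z = 0 (a common simplification; all names here are illustrative):

```python
import numpy as np

def image_to_world(u, v, M, R, t):
    """Invert s [u, v, 1]^T = M [R | t] [x, y, z, 1]^T for a point on
    the carrier plane z = 0, following equation (1)."""
    # The z = 0 plane induces a homography H = M [r1 r2 t].
    H = M @ np.column_stack((R[:, 0], R[:, 1], t))
    xyw = np.linalg.inv(H) @ np.array([u, v, 1.0])
    xyw /= xyw[2]                 # divide out the scale factor s
    return xyw[0], xyw[1]         # position coordinates on the carrier
```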
Referring to fig. 3, acquiring a gaze area of an eyeball of an operator on the carrier by using a camera includes:
1111. and starting the second camera to acquire the face image of the operator.
The position of the second camera is not limited, provided it can capture all or most of the features of the face.
1112. Determine the position of the pupil center in the face image.
The manner in which the pupil center location is determined may be various. Illustratively, the features of the face image are extracted by a machine learning method such as a convolutional neural network, so as to obtain the position of the pupil center in the face image. The pupil center position can also be determined by training a cascade classifier for human eye recognition based on an Adaboost algorithm and tracking the human eye feature points by combining an ASM algorithm.
1113. Determine the offset of the pupil center position relative to a reference point.
The reference point here is the position of the pupil center when the operator looks straight ahead in the horizontal direction; when the pupil center deviates from the reference point, the line of sight shifts in the corresponding direction, and the gaze direction at the offset position can be determined from the straight-ahead direction and the offset.
For example, to determine the offset, the eye appearance may be constructed from the feature points of the pupil, and the position of the pupil center when looking straight ahead determined and recorded as the first position; the pupil center position obtained in step 1112 is recorded as the second position, and the displacement of the second position relative to the first position is the offset.
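A trivial sketch of this offset computation; the centroid-of-landmarks reference is an assumption about how the "eye appearance" yields the straight-ahead position, and the pupil-center estimator of step 1112 is outside this snippet:

```python
import numpy as np

def reference_from_eye_landmarks(eye_landmarks):
    """Approximate the straight-ahead pupil position (the first
    position) as the centroid of the eye-contour feature points."""
    return np.mean(np.asarray(eye_landmarks, dtype=float), axis=0)

def pupil_offset(pupil_center, reference_point):
    """Displacement of the current pupil center (the second position,
    from step 1112) relative to the reference point."""
    return (np.asarray(pupil_center, dtype=float)
            - np.asarray(reference_point, dtype=float))
```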
1114. Determine the gaze direction of the eye and the gaze point on the carrier based on the offset.
For example, the mapping relationship between the offset and the gaze direction and the gaze point may be directly established according to the measurement data, the gaze direction may be determined based on the offset, and the gaze point may be determined based on the gaze direction.
For example, training data may be prepared in advance and a classifier trained on corresponding training samples to establish the mapping from the offset to the gaze direction and gaze point; the classifier can then be used directly to obtain the gaze direction and gaze point. Alternatively, the classifier can be trained to map the offset directly to the gaze area, which is then obtained in a single classification step.
Illustratively, the gaze point may also be determined by means of auxiliary light sources, in which case step 1113 can be omitted. For example, several auxiliary light sources such as near-infrared sources illuminate the eye, so that their Purkinje spots appear in the face image; using the Purkinje spots and the pupil center position, a mapping among the face image, the eye, and the carrier is established based on cross-ratio invariance, yielding the gaze point of the eye on the carrier.
After the gazing point is obtained, the position of the gazing point can be corrected according to the face orientation of the face image, and the correction can be realized by adding the feature of the face orientation in the mapping relation, so that more accurate gazing point information can be obtained.
1115. Determine the gaze area with the gaze point as its center.
After the gaze point position is obtained, the gaze area can be determined with the gaze point as its center. The gaze area may be regular or irregular in shape and sized as needed; for example, a circular or square gaze area may be used.
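A small sketch of a circular gaze area and the membership test used in the next step (the radius value is an illustrative assumption):

```python
def make_gaze_area(gaze_point, radius=40.0):
    """Circular gaze area centered on the gaze point of step 1115."""
    return {"center": gaze_point, "radius": radius}

def coords_in_gaze_area(area, position_coords):
    """Return the operating-body position coordinates that fall inside
    the gaze area."""
    cx, cy = area["center"]
    r2 = area["radius"] ** 2
    return [(x, y) for (x, y) in position_coords
            if (x - cx) ** 2 + (y - cy) ** 2 <= r2]
```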
120. Performing a first operation when there is one and only one of the position coordinates within the gaze area; the first operation is to photograph the content on the carrier based on the position coordinates and a preset rule.
Based on the description in step 110, for frame selection by point contact, when exactly one position coordinate lies within the gaze area, the eye gaze agrees with the operating body's pointing, and the framing operation can be performed; whether an operating body exists outside the gaze area need not be considered. Of course, in some cases, if an operating body outside the gaze area (which is no longer regarded as an operating body and is defined as an auxiliary body) interferes with photographing the carrier content under the preset rule, the smart device may issue an interaction instruction, for example the voice prompt "please move the auxiliary body occluding the question", or may remove the auxiliary body from the captured content.
Fig. 4 shows the operation of pointing at the frame-selection area with a finger. The hand 210 presses the carrier 240 while the finger 221 of the hand 220 locates the frame-selection area. If the gaze area 232 formed around the gaze point 231 of the eye 230 covers the fingertip of the finger 221, and one and only one coordinate position 2211 exists in the gaze area as shown in fig. 4, then the fingertip coordinates of the finger 221 are the locating point of the frame-selection area, and the content 241 on the carrier is framed, i.e. photographed, according to the fingertip coordinates and the preset rule.
There are various ways to photograph the content on the carrier based on the position coordinates and a preset rule. Illustratively, if the preset rule is to frame the question on the upper side of the operating body, then based on the preview image, a first horizontal line is set at the target position coordinates (the single position coordinate in the gaze area), parallel to the upper or lower edge of the carrier; the question number is then found and a second horizontal line, parallel to the first, is set above it; the area between the two parallel lines in the preview image is selected and photographed. The second horizontal line may also be set based on the spacing between two questions (question spacing is generally greater than line spacing).
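A sketch of this "one question above the operating body" rule; find_question_top is a hypothetical detector for the y coordinate above the question number, not an API from the patent:

```python
def frame_question_above(preview, tip_xy, find_question_top):
    """Crop the question above the operating body: the first horizontal
    line passes through the fingertip, the second lies above the
    detected question number (find_question_top is assumed)."""
    _, y_bottom = tip_xy
    y_top = find_question_top(preview, y_bottom)
    return preview[y_top:y_bottom, :]  # region between the two lines
```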
As an alternative, in step 120, if the gaze area is not on the carrier, or the position coordinates are not detected, or no position coordinate or two or more position coordinates exist within the gaze area, a second operation is performed.
The second operation may be doing nothing, or sending an alert or a voice instruction. For example, it may be a voice prompt that the frame-selected content cannot be recognized, or an interactive voice prompt: if the gaze area is not on the carrier, the smart device may say "please look at the carrier"; if no position coordinates are detected, "please indicate the frame-selected content with a finger"; if no position coordinate exists in the gaze area, "please look at the operating body" or "please place the operating body within the gaze range"; if two or more position coordinates exist in the gaze area, "please remove the auxiliary body".
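Putting steps 110 and 120 together, the dispatch between the first and second operation might look like the following sketch (it reuses coords_in_gaze_area from above; the two operation callbacks are assumed):

```python
def frame_selection_dispatch(gaze_area_on_carrier, area, position_coords,
                             first_operation, second_operation):
    """Step 120: perform the first operation only when exactly one
    position coordinate lies in the gaze area; otherwise fall back to
    the second operation (prompt, alert, or no-op)."""
    if not gaze_area_on_carrier or not position_coords:
        return second_operation()
    inside = coords_in_gaze_area(area, position_coords)
    if len(inside) == 1:
        return first_operation(inside[0])
    return second_operation()
```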
By implementing the embodiment of the present invention, the frame-selection area can be obtained by combining operating-body positioning with eye tracking when multiple point-contact operating bodies appear; the existing failure to recognize multiple operating bodies is avoided, question-framing accuracy and efficiency are improved, and on this basis user experience is markedly improved.
Example two
When an operator places an operating body such as a ruler on the carrier, position coordinates of a plurality of operating-body points may appear, and these position coordinates may fall partly or wholly within the gaze area. As shown in fig. 5, in this case the photographing frame selection method includes:
310. When a frame selection instruction is received, the position coordinates of the operating body on the carrier and the gaze area of the operator's eyes on the carrier are acquired by the camera.
Step 310 is substantially the same as step 110 described in the first embodiment, and is not described herein again.
320. When a plurality of position coordinates exist in the gaze area, judge whether the position coordinates are continuous points.
For an operating body in the form of fingers, the position coordinates in the gaze area are formed by several fingers side by side, i.e. several fingertips can be detected. If at least two fingertips lie in the gaze area and the distance between them is smaller than a preset distance threshold, the fingertips are continuous points, and the line segment connecting these continuous points is called the fingertip judgment segment.
For an operating body in the form of a stylus, pencil, small stick, ruler, or the like, provided the hand-held portion is not in the gaze area, the position coordinates in the gaze area necessarily form continuous points; the segment formed by the continuous points lies along the length of the operating body. One or more such segments may exist in the gaze area, and the segment or segments taken as a whole are regarded as the lying-flat judgment segment.
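A sketch of the continuity test of step 320; the gap threshold is an assumed parameter:

```python
import numpy as np

def are_continuous_points(coords, max_gap=15.0):
    """Step 320 sketch: order the points along their direction of
    largest spread and require every consecutive pair to lie closer
    than max_gap."""
    pts = np.asarray(coords, dtype=float)
    if len(pts) < 2:
        return False
    axis = int(np.argmax(np.ptp(pts, axis=0)))  # rough principal axis
    pts = pts[np.argsort(pts[:, axis])]
    gaps = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    return bool(np.all(gaps <= max_gap))
```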
330. If the position coordinates are continuous points, detect the edges of the carrier and obtain the included angle between the continuous points and any edge of the carrier.
Edge detection is performed on the preview image, likewise after the preview image has been preprocessed.
For example, the included angle between the continuous points and any edge of the carrier may be calculated by a slope method, e.g., from any two of the continuous points and any two points on the edge.
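A sketch of the slope-based angle computation between the continuous-point segment and a carrier edge (plain geometry; the 5-degree preset of step 340 is the comparison threshold):

```python
import math

def segment_edge_angle(p1, p2, q1, q2):
    """Acute angle in degrees between the line through two continuous
    points (p1, p2) and the line through two edge points (q1, q2)."""
    a1 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    a2 = math.atan2(q2[1] - q1[1], q2[0] - q1[0])
    d = abs(a1 - a2) % math.pi
    return math.degrees(min(d, math.pi - d))

# Usage: segment_edge_angle(c[0], c[-1], e[0], e[1]) <= 5.0 would
# trigger the first operation of step 340.
```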
For a fingertip judgment segment, or a lying-flat judgment segment formed by a single segment, there is only one included angle; for a lying-flat judgment segment formed by several segments as a whole, several included angles may exist. Therefore, in the preferred embodiment of the present invention, for ease of calculation, lying-flat judgment segments formed by several segments are excluded before the included angle is obtained.
Illustratively, the exclusion method may be: if more than two endpoints are detected within the gaze area, the second operation is performed directly, without calculating the included angle.
340. If the included angle between the continuous points and one edge of the carrier is smaller than or equal to a preset included angle, perform a first operation; the first operation is to photograph the content on the carrier based on the continuous points and a preset rule.
When only one included angle exists, it is compared with a preset included angle, which is set as needed and may be, for example, 5 degrees. That is, when the included angle is smaller than or equal to the preset included angle, the judgment segment can be considered approximately parallel to one edge of the carrier; the frame-selection area can then be determined from the continuous points and the preset rule, and the content in the frame-selection area photographed.
The preset rule differs from that of the first embodiment: here the frame-selection area is generally part or all of the region between the continuous points and the edge parallel or approximately parallel to them. Illustratively, let the straight line through the continuous points be the first straight line. If the first straight line is parallel or approximately parallel to the upper or lower edge of the carrier, it is defined as a horizontal line, and the frame-selection area is part or all of the region above or below it; if it is parallel or approximately parallel to the left or right edge, it is defined as a vertical line, and the frame-selection area is part or all of the region to its left or right.
Fig. 6 shows frame selection with a pencil laid on the carrier as the operating body. The pencil 410 locates the frame-selection area: several position coordinates of the pencil 410 lie within the gaze area 432 formed around the gaze point 431 of the eye 430, and they form the continuous points 420. In fig. 6 the continuous points 420 are parallel to the upper or lower edge of the carrier 440, so according to step 340 the content 441 on the carrier can be determined as the frame-selection area from the continuous points and the preset rule, and the content of the frame-selection area photographed.
Fig. 7 shows frame selection with two rulers placed on the carrier as operating bodies. The ruler 510 and the ruler 520 are placed crosswise on the carrier 560; several position coordinates of the rulers lie within the gaze area 532 formed around the gaze point 531 of the eye 530, forming the continuous points 540 and 550 respectively. Since the continuous points 540 and 550 have four endpoints in total, the second operation is performed directly.
As an optional implementation, in step 320, the second operation is performed when the gaze area is not on the carrier; or the position coordinates are not detected; or no position coordinate exists in the gaze area; or several position coordinates exist in the gaze area but are not continuous points, or not all of them are; or several position coordinates exist in the gaze area and are continuous points, but the included angles between the continuous points and all edges of the carrier are greater than the preset included angle.
The second operation may be doing nothing, or sending an alert or a voice instruction. It may, for example, be a voice prompt that the frame-selected content cannot be recognized, or an interactive voice prompt similar to that of the first embodiment.
By implementing the embodiment of the present invention, the frame-selection area can be obtained by combining operating-body positioning with eye tracking when line-contact operating bodies appear; the existing failure to recognize multiple operating bodies is avoided, question-framing accuracy and efficiency are improved, and on this basis user experience is markedly improved.
EXAMPLE III
Referring to fig. 8, fig. 8 is a schematic structural diagram of a photographing frame selection apparatus according to an embodiment of the present invention. As shown in fig. 8, the photographing frame selection apparatus may include:
an acquisition unit 610, configured to, when a frame selection instruction is received, acquire, by using a camera, the position coordinates of the operating body on the carrier and the gaze area of the operator's eyes on the carrier;
an execution unit 620, configured to perform a first operation when there is one and only one position coordinate in the gaze area; the first operation is to photograph the content on the carrier based on the position coordinates and a preset rule.
As an alternative embodiment, the acquisition unit 610 may include a first unit 611 and a second unit 612, where the first unit 611 is configured to acquire the position coordinates of the operating body on the carrier by using a camera, and the second unit 612 is configured to acquire the gaze area of the operator's eyes on the carrier by using the camera.
As an alternative embodiment, the first unit 611 may include:
a first subunit 6111, configured to start the first camera and acquire a preview image of the carrier with the operating body;
a second subunit 6112, configured to determine the image coordinates of the operating body in the preview image;
a third subunit 6113, configured to determine the position coordinates of the operating body by using the image coordinates.
As an alternative embodiment, the second subunit 6112 may be configured to: identify the operating body by using the color difference between the operating body and the content on the carrier; and determine the image coordinates of the operating body in the preview image.
As an alternative embodiment, the second unit 612 may include:
a fourth subunit 6121, configured to start the second camera and acquire a face image of the operator;
a fifth subunit 6122, configured to determine the position of the pupil center in the face image and the offset of the pupil center position relative to a reference point;
a sixth subunit 6123, configured to determine, based on the offset, the gaze direction of the eye and the gaze point on the carrier;
a seventh subunit 6124, configured to determine the gaze area with the gaze point as its center.
As an alternative embodiment, the fifth subunit 6122 may be configured to: input the face image into a previously trained convolutional neural network to determine the feature points of the pupil; determine the position of the pupil center by using the feature points of the pupil; construct the eye appearance according to the feature points of the pupil, taking the position of the pupil center when looking straight ahead as the reference point; and calculate the offset of the pupil center position relative to the reference point according to the position of the pupil center and the position of the reference point.
As an optional implementation, the system may further include: performing a second operation when the gazing area is not in the carrier, or the position coordinates are not detected, or no position coordinates or two or more position coordinates are present within the gazing area.
The photographing frame selection apparatus shown in fig. 8 can obtain the frame-selection area by combining operating-body positioning with eye tracking when multiple point-contact operating bodies appear, avoiding the failure to recognize multiple operating bodies, improving question-framing accuracy and efficiency, and on this basis markedly improving user experience.
Example four
Referring to fig. 9, fig. 9 is a schematic structural diagram of another photographing frame selection apparatus according to an embodiment of the present invention. As shown in fig. 9, the photographing frame selection apparatus may include:
the acquisition unit 710 is configured to, when receiving a frame selection instruction, acquire a position coordinate of the operating body on the carrier and a gaze area of an eyeball of the operator on the carrier by using the camera;
a determining unit 720, configured to determine whether the plurality of position coordinates are continuous points when the plurality of position coordinates exist in the gazing area;
the obtaining unit 730 is configured to detect an edge of the carrier if the position coordinates are continuous points, and obtain an included angle between the continuous points and any edge of the carrier;
an executing unit 740, configured to execute a first operation if an included angle between the continuous point and one edge of the carrier is smaller than or equal to a preset included angle; the first operation is to photograph the content on the bearer based on the continuous points and a preset rule.
As an optional implementation, the apparatus may further include a unit configured to perform a second operation when the gaze area is not on the carrier; or the position coordinates are not detected; or no position coordinate exists within the gaze area; or a plurality of position coordinates exist within the gaze area but are not continuous points, or not all of them are; or a plurality of position coordinates exist within the gaze area and are continuous points, but the included angles between the continuous points and all edges of the carrier are greater than a preset included angle.
The photographing frame selection apparatus shown in fig. 9 can obtain the frame-selection area by combining operating-body positioning with eye tracking when operating bodies in line contact appear, avoiding the failure to recognize multiple operating bodies, improving question-framing accuracy and efficiency, and on this basis markedly improving user experience.
EXAMPLE five
Referring to fig. 10, fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure. The electronic device may be a smart device such as a learning machine, a home tutoring machine, a point-reading machine, a tablet computer or a mobile phone. As shown in fig. 10, the electronic device may include:
a memory 810 storing executable program code;
a processor 820 coupled to the memory 810;
the processor 820 calls the executable program code stored in the memory 810 to perform some or all of the steps of the photographing frame selection method in either the first or the second embodiment.
The embodiment of the present invention discloses a computer-readable storage medium storing a computer program, where the computer program causes a computer to perform some or all of the steps of the photographing frame selection method in either the first or the second embodiment.
The embodiment of the present invention also discloses a computer program product which, when run on a computer, causes the computer to perform some or all of the steps of the photographing frame selection method in either the first or the second embodiment.
The embodiment of the present invention also discloses an application publishing platform configured to publish a computer program product which, when run on a computer, causes the computer to perform some or all of the steps of the photographing frame selection method in either the first or the second embodiment.
In the various embodiments of the present invention, it should be understood that the sequence numbers of the processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated units, if implemented as software functional units and sold or used as stand-alone products, may be stored in a computer-accessible memory. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product stored in a memory and including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like, and specifically may be a processor in the computer device) to perform some or all of the steps of the methods according to the embodiments of the present invention.
In the embodiments provided herein, it should be understood that "B corresponding to a" means that B is associated with a from which B can be determined. It should also be understood, however, that determining B from a does not mean determining B from a alone, but may also be determined from a and/or other information.
Those of ordinary skill in the art will appreciate that some or all of the steps of the methods of the embodiments may be implemented by a program instructing related hardware, the program being stored in a computer-readable storage medium such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-Time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical or magnetic disk storage, tape storage, or any other computer-readable medium that can be used to carry or store data.
The method, the apparatus, the electronic device and the storage medium for photo frame selection disclosed in the embodiments of the present invention are described in detail above, and a specific example is applied in the present disclosure to explain the principle and the implementation of the present invention, and the description of the above embodiments is only used to help understanding the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (11)

1. A photographing frame selection method, characterized by comprising:
when a frame selection instruction is received, acquiring, by a camera, position coordinates of an operating body on a carrier and a gaze area of the operator's eyes on the carrier;
when one and only one of the position coordinates exists within the gaze area, performing a first operation, the first operation being to photograph content on the carrier based on the position coordinates and a preset rule, the preset rule being to frame-select a question above the operating body;
wherein acquiring, by the camera, the position coordinates of the operating body on the carrier comprises:
starting a first camera and acquiring a preview image of the carrier bearing the operating body;
determining image coordinates of the operating body in the preview image; and
determining the position coordinates of the operating body by converting the image coordinates into world coordinates.

2. The method according to claim 1, wherein determining the image coordinates of the operating body in the preview image comprises:
identifying the operating body by the color difference between the operating body and the content on the carrier; and
determining the image coordinates of the operating body in the preview image.

3. The method according to claim 1 or 2, wherein acquiring, by the camera, the gaze area of the operator's eyes on the carrier comprises:
starting a second camera and acquiring a face image of the operator;
determining the position of the pupil center in the face image and the offset of the pupil center position relative to a reference point;
determining, based on the offset, the gaze direction of the eyes and the fixation point on the carrier; and
determining the gaze area centered on the fixation point.

4. The method according to claim 3, wherein determining the position of the pupil center in the face image and the offset of the pupil center position relative to the reference point comprises:
inputting the face image into a previously trained convolutional neural network to determine feature points of the pupil;
determining the position of the pupil center from the feature points of the pupil; and
determining the offset of the pupil center position according to the position of the pupil center and the reference point.

5. The method according to claim 4, wherein determining the offset of the pupil center position according to the position of the pupil center and the reference point comprises:
constructing the eye appearance from the feature points of the pupil and taking the pupil center position when the eye looks straight ahead as the reference point; and
calculating the offset of the pupil center position relative to the reference point according to the position of the pupil center and the position of the reference point.

6. The method according to claim 1, further comprising:
when the gaze area is not on the carrier, or the position coordinates are not detected, or no position coordinates exist within the gaze area, or two or more position coordinates exist within the gaze area, performing a second operation, the second operation being to do nothing, to issue a warning reminder, or to issue a voice instruction.

7. A photographing frame selection apparatus, characterized by comprising:
an acquisition unit configured to acquire, by a camera, position coordinates of an operating body on a carrier and a gaze area of the operator's eyes on the carrier when a frame selection instruction is received; and
an execution unit configured to perform a first operation when one and only one of the position coordinates exists within the gaze area, the first operation being to photograph content on the carrier based on the position coordinates and a preset rule, the preset rule being to frame-select a question above the operating body;
wherein acquiring, by the camera, the position coordinates of the operating body on the carrier comprises:
starting a first camera and acquiring a preview image of the carrier bearing the operating body;
determining image coordinates of the operating body in the preview image; and
determining the position coordinates of the operating body by converting the image coordinates into world coordinates.

8. An electronic device, characterized by comprising: a memory storing executable program code; and a processor coupled to the memory, the processor calling the executable program code stored in the memory to perform the photographing frame selection method according to any one of claims 1 to 6.

9. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program, wherein the computer program causes a computer to perform the photographing frame selection method according to any one of claims 1 to 6.

10. A photographing frame selection method, characterized by comprising:
when a frame selection instruction is received, acquiring, by a camera, position coordinates of an operating body on a carrier and a gaze area of the operator's eyes on the carrier;
when a plurality of the position coordinates exist within the gaze area, determining whether the plurality of position coordinates are continuous points;
if the plurality of position coordinates are continuous points, detecting the edges of the carrier and obtaining the included angle between the continuous points and any edge of the carrier; and
if the included angle between the continuous points and one of the edges of the carrier is less than or equal to a preset angle, performing a first operation, the first operation being to photograph content on the carrier based on the continuous points and a preset rule;
wherein the preset rule is to take part or all of the area between the continuous points and an edge parallel or approximately parallel to the continuous points as the frame selection area.

11. The method according to claim 10, further comprising:
when the gaze area is not on the carrier, or the position coordinates are not detected, or no position coordinates exist within the gaze area, or a plurality of position coordinates exist within the gaze area but are not, or are not all, continuous points, or a plurality of position coordinates exist within the gaze area and are continuous points but the included angles between the continuous points and all edges of the carrier are greater than the preset angle, performing a second operation, the second operation being to do nothing, to issue a warning reminder, or to issue a voice instruction.
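For orientation, the following is a minimal sketch of how the fingertip localization of claim 2 and the image-to-world conversion of claim 1 might be realized with OpenCV. The HSV skin-tone range and the page-corner calibration are hypothetical placeholders rather than values taken from the specification; a real device would obtain both from its own camera calibration.

```python
# Sketch of claims 1-2: locate the operating body (a fingertip) in the preview
# image by color difference, then map its image coordinates to world coordinates
# on the carrier plane via a calibrated homography.
import cv2
import numpy as np

# Hypothetical calibration: the carrier's corners in preview pixels and their
# physical positions in millimetres (an A4 page lying flat on the desk).
IMAGE_CORNERS = np.float32([[112, 80], [1168, 96], [1152, 944], [96, 928]])
WORLD_CORNERS = np.float32([[0, 0], [210, 0], [210, 297], [0, 297]])
H = cv2.getPerspectiveTransform(IMAGE_CORNERS, WORLD_CORNERS)

def fingertip_image_coords(preview_bgr):
    """Claim 2: identify the operating body by the color difference between
    skin and the (mostly white) content on the carrier."""
    hsv = cv2.cvtColor(preview_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))   # hypothetical skin range
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None                                        # no operating body found
    finger = max(contours, key=cv2.contourArea)            # largest skin blob
    x, y = min(finger[:, 0, :], key=lambda p: p[1])        # topmost point ~ fingertip
    return float(x), float(y)

def image_to_world(image_xy):
    """Claim 1: convert image coordinates into world coordinates on the carrier."""
    pt = np.float32([[image_xy]])                          # shape (1, 1, 2) for cv2
    return cv2.perspectiveTransform(pt, H)[0, 0]           # (x, y) in millimetres
```

The preset rule of claim 1, framing the question above the operating body, would then crop the photographed page upward from the returned world coordinate.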
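Claims 3 to 5 reduce gaze estimation to an offset of the pupil center from a straight-ahead reference point. Below is a simplified sketch of that mapping together with the one-and-only-one test of claims 1 and 6; the linear gains and the circular gaze area are assumptions for illustration, since the claims do not fix a particular calibration model.

```python
# Sketch of claims 3-6: map the pupil-centre offset to a fixation point on the
# carrier, then decide between the first and second operations.
import numpy as np

GAIN_MM_PER_PX = (9.0, 7.5)     # hypothetical offset-to-carrier gains (x, y)
GAZE_RADIUS_MM = 40.0           # hypothetical radius of the gaze area

def fixation_point(pupil_center, reference_point, carrier_center):
    """Claims 4-5: offset of the pupil centre from the straight-ahead reference
    point, scaled linearly onto the carrier plane."""
    dx = pupil_center[0] - reference_point[0]
    dy = pupil_center[1] - reference_point[1]
    return (carrier_center[0] + GAIN_MM_PER_PX[0] * dx,
            carrier_center[1] + GAIN_MM_PER_PX[1] * dy)

def decide_operation(position_coords, fixation):
    """Claims 1 and 6: first operation iff exactly one position coordinate of
    the operating body falls inside the gaze area around the fixation point."""
    hits = [p for p in position_coords
            if np.hypot(p[0] - fixation[0], p[1] - fixation[1]) <= GAZE_RADIUS_MM]
    return "first" if len(hits) == 1 else "second"
```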
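Claim 10's variant frames the region between a line of "continuous" fingertip points and a roughly parallel page edge. One plausible way to make "continuous points" and "included angle" concrete is a least-squares line fit; the residual tolerance and the 15-degree threshold below are illustrative stand-ins for the preset angle, which the claim leaves as a parameter.

```python
# Sketch of claims 10-11: test whether multiple position coordinates form a
# line (continuous points) and whether that line is nearly parallel to an edge.
import numpy as np

ANGLE_THRESHOLD_DEG = 15.0      # illustrative stand-in for the preset angle

def line_direction(points):
    """Unit direction of the least-squares line through the points."""
    pts = np.asarray(points, dtype=float)
    centred = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centred)
    return vt[0]                                  # first principal direction

def are_continuous(points, max_residual_mm=3.0):
    """Treat the coordinates as continuous points if they all lie close to one line."""
    pts = np.asarray(points, dtype=float)
    d = line_direction(pts)
    normal = np.array([-d[1], d[0]])              # unit normal of the fitted line
    residuals = np.abs((pts - pts.mean(axis=0)) @ normal)
    return bool(np.all(residuals <= max_residual_mm))

def parallel_to_edge(points, edge_direction):
    """Claim 10: included angle between the continuous points and a carrier edge."""
    d = line_direction(points)
    e = np.asarray(edge_direction, dtype=float)
    e = e / np.linalg.norm(e)
    cos_angle = abs(float(d @ e))                 # sign-insensitive
    angle = np.degrees(np.arccos(np.clip(cos_angle, 0.0, 1.0)))
    return angle <= ANGLE_THRESHOLD_DEG
```

If both tests pass, the frame selection area is the strip between the fitted line and the parallel edge, per claim 10's preset rule; otherwise claim 11's second operation applies.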
CN202010368339.2A 2020-04-30 2020-04-30 Photographing frame selection method and device, electronic equipment and storage medium Active CN111432131B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010368339.2A CN111432131B (en) 2020-04-30 2020-04-30 Photographing frame selection method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010368339.2A CN111432131B (en) 2020-04-30 2020-04-30 Photographing frame selection method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111432131A CN111432131A (en) 2020-07-17
CN111432131B true CN111432131B (en) 2022-03-08

Family

ID=71552306

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010368339.2A Active CN111432131B (en) 2020-04-30 2020-04-30 Photographing frame selection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111432131B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112926099B (en) * 2021-04-02 2021-10-26 珠海市鸿瑞信息技术股份有限公司 Management system based on remote control identity authentication

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2843507A1 (en) * 2013-08-26 2015-03-04 Thomson Licensing Display method through a head mounted device
US20150302585A1 (en) * 2014-04-22 2015-10-22 Lenovo (Singapore) Pte. Ltd. Automatic gaze calibration
CN105446673B (en) * 2014-07-28 2018-10-19 华为技术有限公司 The method and terminal device of screen display
CN106814854A (en) * 2016-12-29 2017-06-09 杭州联络互动信息科技股份有限公司 A kind of method and device for preventing maloperation
CN107957779A (en) * 2017-11-27 2018-04-24 海尔优家智能科技(北京)有限公司 A kind of method and device searched for using eye motion control information
CN109409234B (en) * 2018-09-27 2022-08-02 广东小天才科技有限公司 Method and system for assisting students in problem location learning
CN109376737A (en) * 2018-09-27 2019-02-22 广东小天才科技有限公司 Method and system for assisting user in solving learning problem
CN110597450A (en) * 2019-09-16 2019-12-20 广东小天才科技有限公司 False touch prevention identification method and device, touch reading equipment and touch reading identification method thereof

Also Published As

Publication number Publication date
CN111432131A (en) 2020-07-17

Similar Documents

Publication Publication Date Title
CN111753767A (en) Method and device for automatically correcting operation, electronic equipment and storage medium
CN104350509B (en) Quick attitude detector
US10318797B2 (en) Image processing apparatus and image processing method
EP2336949B1 (en) Apparatus and method for registering plurality of facial images for face recognition
CN110532984A (en) Critical point detection method, gesture identification method, apparatus and system
CN109274891B (en) Image processing method, device and storage medium thereof
US10423824B2 (en) Body information analysis apparatus and method of analyzing hand skin using same
CN110353622A (en) A kind of vision testing method and eyesight testing apparatus
US8917957B2 (en) Apparatus for adding data to editing target data and displaying data
CN103472915B (en) reading control method based on pupil tracking, reading control device and display device
CN106778627B (en) Detect the method, apparatus and mobile terminal of face face value
CN110705350B (en) Certificate identification method and device
CN111753715B (en) Method and device for shooting test questions in click-to-read scene, electronic equipment and storage medium
CN111753168A (en) A method, device, electronic device and storage medium for searching questions
JP7513451B2 (en) Biometric authentication device and method
CN111432131B (en) Photographing frame selection method and device, electronic equipment and storage medium
JP6825397B2 (en) Biometric device, biometric method and biometric program
WO2018059258A1 (en) Implementation method and apparatus for providing palm decoration virtual image using augmented reality technology
CN115984895A (en) Gesture recognition method and device, computer readable storage medium and terminal
CN113703577B (en) Drawing method, drawing device, computer equipment and storage medium
CN112580409A (en) Target object selection method and related product
CN110443122A (en) Information processing method and Related product
CN111027353A (en) Search content extraction method and electronic equipment
CN111382598A (en) Identification method and device and electronic equipment
CN111027556B (en) Question searching method and learning device based on image preprocessing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant