CN113556438A - Scanning control method, system, electronic device and storage medium - Google Patents
Info
- Publication number
- CN113556438A (application CN202110661393.0A)
- Authority
- CN
- China
- Prior art keywords
- scanning
- plane
- scanned
- segmented
- text
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/04—Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa
- H04N1/047—Detection, control or error compensation of scanning velocity or position
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72448—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
- H04M1/72454—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
Landscapes
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Computer Networks & Wireless Communication (AREA)
- Environmental & Geological Engineering (AREA)
- Multimedia (AREA)
- Studio Devices (AREA)
Abstract
The invention discloses a scanning control method, system, electronic device and storage medium, wherein the method comprises the following steps: acquiring a shot plane corresponding to an object to be scanned and a shooting plane corresponding to the camera module; and judging whether the shot plane and the shooting plane are parallel, and if not, generating an adjusting instruction to adjust the shot plane and the shooting plane to parallel positions. The invention detects in real time whether the shooting plane of the camera module is parallel to the shot plane, and adjusts and corrects in time when they are not parallel so as to guarantee a high-quality scanning effect, thereby improving the recognition rate and precision of the text content in the photographed object during shooting and facilitating later correction or cropping of the scanned or photographed text. When the scanned text is oversized, segmented scanning is adopted to obtain a plurality of segmented scanned images, which are spliced to obtain the target scanned text image, achieving accurate scanning and later restoration of the oversized text.
Description
Technical Field
The present invention relates to the field of scan processing technologies, and in particular, to a scan control method, a scan control system, an electronic device, and a storage medium.
Background
In the past, a professional scanner was needed to scan documents or certificates; nowadays they can be scanned with nothing more than a mobile phone equipped with a camera, and the background can be removed automatically. However, it is difficult to ensure that the camera of the mobile phone is parallel to the scanned text, so the scanned text easily appears tilted and its content incomplete (tilted scanning reduces the accuracy of text recognition and hinders later correction).
At present, mobile phone scanning software offers only manual post-correction, i.e., the scanning result is adjusted by eye. When the shooting plane of the mobile phone is not parallel to the text, later correction is difficult (a text that is originally rectangular appears as an irregular quadrangle in the photograph). In that case, in order to preserve the completeness of the text content, the user may manually adjust the text into a trapezoid, or shrink the text content in order to preserve its shape; however, such manual adjustment suffers from low precision and untimely adjustment, and cannot meet actual scanning requirements.
Disclosure of Invention
The invention aims to overcome the defects of the prior art, in which text scanning relies on manual adjustment, suffers from low adjustment precision and untimely adjustment, and cannot meet actual scanning requirements, and provides a scanning control method, system, electronic device and storage medium.
The invention solves the technical problems through the following technical scheme:
the invention provides a scanning control method, which is applied to an intelligent terminal provided with a camera module, and comprises the following steps:
acquiring a shot plane corresponding to an object to be scanned and a shooting plane corresponding to the camera module;
and judging whether the shot plane and the shooting plane are parallel, if not, generating an adjusting instruction to adjust the shot plane and the shooting plane to be parallel.
Preferably, the step of judging whether the shot plane and the shooting plane are parallel, and if not, generating an adjusting instruction to adjust the shot plane and the shooting plane to parallel positions includes:
acquiring a text image corresponding to the object to be scanned;
extracting boundary shape information corresponding to a text area in the text image;
judging whether the boundary shape information meets reference rectangle information or not, and if so, determining that the shot plane is parallel to the shooting plane;
if not, determining that the shot plane is not parallel to the shooting plane, carrying out perspective transformation on the boundary shape information to the reference rectangle information to obtain transformation parameters, and adjusting the shot plane and the shooting plane to be parallel to each other according to the transformation parameters.
Preferably, the camera module comprises a first image acquisition camera and three second image acquisition cameras arranged around the first image acquisition camera;
the step of judging whether the shot plane and the shooting plane are parallel or not, and if not, generating an adjusting instruction to adjust the shot plane and the shooting plane to parallel positions comprises the following steps:
acquiring first distance information between the shooting plane where the three second image acquisition cameras are located and the shot plane by adopting a triangulation ranging technique based on the first image acquisition camera and the second image acquisition cameras;
judging whether first difference values between every two pieces of first distance information are smaller than a first set threshold value, and if so, determining that the shot plane is parallel to the shooting plane;
if not, determining that the shot plane is not parallel to the shooting plane, selecting one piece of first distance information as reference distance information, and generating an adjusting instruction to adjust all remaining first distance information to the reference distance information so as to enable the shot plane and the shooting plane to reach parallel positions.
Preferably, the step of selecting one piece of first distance information as the reference distance information includes:
and selecting the maximum value in all the first distance information as the reference distance information.
Preferably, the camera module comprises N image acquisition cameras, wherein N is more than or equal to 3 and is an integer;
before the step of judging whether the shot plane and the shooting plane are parallel, the method further comprises the following steps:
judging whether the marked position information corresponding to the vertical incidence point of each image acquisition camera is in the text range of the object to be scanned, if so, executing the step of judging whether the shot plane is parallel to the shooting plane;
and if not, controlling the camera module to move in the horizontal direction until the marking position information corresponding to the vertical incidence point of each image acquisition camera falls into the text range of the object to be scanned.
Preferably, the method further comprises:
acquiring an imaging range of the camera module;
and judging whether the imaging range covers the whole text range of the object to be scanned, if not, controlling the camera module to move in the vertical direction until the imaging range covers the whole text range of the object to be scanned.
Preferably, the camera module further comprises a TOF camera (depth camera);
the method further comprises the following steps:
acquiring second distance information between the image acquisition cameras and the object to be scanned by adopting the TOF camera;
and when the imaging range does not cover the whole text range of the object to be scanned and the second distance information is larger than a second set threshold, determining that the size of the object to be scanned is larger than a set size, and performing segmented scanning on the object to be scanned to acquire a target scanning text image.
Preferably, the step of performing a segmented scanning process on the object to be scanned to obtain a target scanned text image includes:
presetting a scanning size of segmented scanning;
controlling the camera module to perform segmented scanning processing on the object to be scanned according to the scanning size and the set scanning path so as to obtain a plurality of segmented scanning images;
and acquiring time sequence information corresponding to each segmented scanning image, and splicing all the segmented scanning images according to the time sequence information to acquire a target scanning text image.
Preferably, the step of performing a segmented scanning process on the object to be scanned according to the scanning size and the set scanning path to obtain a plurality of segmented scanning images includes:
acquiring third distance information between the plurality of image acquisition cameras and the object to be scanned when each sectional scanning image is acquired by adopting the TOF camera;
acquiring a reserved registration area corresponding to each segmented scanning image;
and when the coincidence degree of the reserved registration area in the current segmented scanning image and the previous segmented scanning image meets a third set threshold value and a second difference value between third distance information corresponding to the two segmented scanning images is smaller than a fourth set threshold value, determining that the segmented scanning is successful, and continuously acquiring the next segmented scanning image until all segmented scanning images corresponding to the object to be scanned are acquired.
Preferably, when the segment scan fails, the method further comprises:
carrying out scaling transformation processing on the current segmented scanning image according to the second difference value so as to scale the adjacent segmented scanning images to the same text scaling;
and performing translation and rotation processing on the current segmented scanned image after the zooming processing until the coincidence degree of the reserved registration area in the current segmented scanned image and the previous segmented scanned image meets the third set threshold value.
Preferably, the method further comprises:
when the object to be scanned corresponds to a plurality of lines of transverse segmented scanning images and the current segmented scanning image reaches the transverse text boundary of the object to be scanned, controlling line feed scanning;
and when the object to be scanned corresponds to a plurality of columns of longitudinal segmented scanning images and the current segmented scanning image reaches the text longitudinal boundary of the object to be scanned, controlling the column change scanning.
Preferably, the reserved registration area corresponds to a transverse reserved registration area and a longitudinal reserved registration area corresponding to the joint of each segmented scanned image and other segmented scanned images.
The invention also provides a scanning control system, which is applied to an intelligent terminal provided with a camera module, and comprises:
the plane acquisition module is used for acquiring a shot plane corresponding to an object to be scanned and a shooting plane corresponding to the camera module;
and the control module is used for judging whether the shot plane and the shooting plane are parallel or not, and if not, generating an adjusting instruction to adjust the shot plane and the shooting plane to be parallel.
Preferably, the control module comprises:
the text image acquisition unit is used for acquiring a text image corresponding to the object to be scanned;
the information extraction unit is used for extracting boundary shape information corresponding to a text area in the text image;
the first judging unit is used for judging whether the boundary shape information meets the reference rectangle information or not, and if so, the first determining unit is called to determine that the shot plane and the shooting plane are parallel; if not, calling the first determining unit to determine that the shot plane is not parallel to the shooting plane;
a transformation processing unit configured to, when the shot plane and the shooting plane are not parallel, perform perspective transformation processing from the boundary shape information to the reference rectangle information to obtain transformation parameters;
a first adjusting unit configured to adjust the shot plane and the shooting plane to parallel positions according to the transformation parameters.
Preferably, the camera module comprises a first image acquisition camera and three second image acquisition cameras arranged around the first image acquisition camera;
the control module includes:
the first distance information acquisition unit is used for acquiring first distance information between the shooting plane where the three second image acquisition cameras are located and the shot plane by adopting a triangulation distance measurement technology based on the first image acquisition cameras and the second image acquisition cameras;
the second judging unit is used for judging whether first difference values between every two pieces of the first distance information are smaller than a first set threshold value, and if so, the second determining unit is called to determine that the shot plane and the shooting plane are parallel; if not, calling the second determining unit to determine that the shot plane is not parallel to the shooting plane;
a reference information selecting unit configured to select one piece of first distance information as the reference distance information;
a second adjusting unit configured to generate an adjusting instruction to adjust all remaining first distance information to the reference distance information so that the shot plane and the shooting plane reach parallel positions.
Preferably, the reference information selecting unit is configured to select a maximum value of all the first distance information as the reference distance information.
Preferably, the camera module comprises N image acquisition cameras, wherein N is more than or equal to 3 and is an integer;
the system further comprises:
the first judgment module is used for judging whether the marking position information corresponding to the vertical incidence point of each image acquisition camera is in the text range of the object to be scanned, if so, the control module is called to judge whether the shot plane is parallel to the shooting plane;
if not, calling the control module to control the camera module to move in the horizontal direction until the marking position information corresponding to the vertical incidence point of each image acquisition camera falls into the text range of the object to be scanned.
Preferably, the system further comprises:
the imaging range acquisition module is used for acquiring the imaging range of the camera module;
and the second judgment module is used for judging whether the imaging range covers the whole text range of the object to be scanned, if not, the control module is called to control the camera module to move in the vertical direction until the imaging range covers the whole text range of the object to be scanned.
Preferably, the camera module further comprises a TOF camera;
the system further comprises:
the distance information acquisition module is used for acquiring second distance information between the image acquisition cameras and the object to be scanned by adopting the TOF camera;
the text size determining module is used for determining that the size of the object to be scanned is larger than a set size when the imaging range does not cover the whole text range of the object to be scanned and the second distance information is larger than a second set threshold;
and the segmented scanning processing module is used for carrying out segmented scanning processing on the object to be scanned so as to obtain a target scanning text image.
Preferably, the segment scan processing module includes:
a scanning size presetting unit for presetting a scanning size of the segment scanning;
the sectional scanning image acquisition unit is used for controlling the camera module to perform sectional scanning processing on the object to be scanned according to the scanning size and the set scanning path so as to acquire a plurality of sectional scanning images;
and the target image acquisition unit is used for acquiring the time sequence information corresponding to each segmented scanning image and splicing all the segmented scanning images according to the time sequence information to acquire a target scanning text image.
Preferably, the segmented scanning image obtaining unit includes:
the distance information acquisition subunit is used for acquiring third distance information between the plurality of image acquisition cameras and the object to be scanned when each sectional scanning image is acquired by adopting the TOF camera;
a reserved registration area obtaining subunit, configured to obtain a reserved registration area corresponding to each segmented scanned image;
and the segmented scanning subunit is used for determining that segmented scanning is successful and continuously acquiring the next segmented scanned image until all segmented scanned images corresponding to the object to be scanned are acquired when the coincidence degree of the reserved registration areas in the current segmented scanned image and the previous segmented scanned image meets a third set threshold and a second difference value between third distance information corresponding to the two segmented scanned images is smaller than a fourth set threshold.
Preferably, when the segment scan fails, the system further comprises:
the zooming conversion processing module is used for carrying out zooming conversion processing on the current segmented scanning image according to the second difference value so as to zoom the adjacent segmented scanning images to the same text zooming ratio;
and the translation and rotation processing module is used for performing translation and rotation processing on the zoomed current segmented scanned image until the coincidence degree of the reserved registration area in the current segmented scanned image and the previous segmented scanned image meets the third set threshold.
Preferably, the segment scan processing module further comprises:
the line feed scanning control unit is used for controlling line feed scanning when the object to be scanned corresponds to a plurality of lines of transverse segmented scanning images and the current segmented scanning image reaches the transverse text boundary of the object to be scanned;
and the column-changing scanning control unit is used for controlling column-changing scanning when the object to be scanned corresponds to a plurality of columns of longitudinal segmented scanning images and the current segmented scanning image reaches the text longitudinal boundary of the object to be scanned.
Preferably, the reserved registration area corresponds to a transverse reserved registration area and a longitudinal reserved registration area corresponding to the joint of each segmented scanned image and other segmented scanned images.
The invention also provides an intelligent terminal which comprises the scanning control system.
The invention further provides an electronic device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor implements the scanning control method when executing the computer program.
The present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the scan control method described above.
On the basis of common knowledge in the field, the above preferred conditions can be combined arbitrarily to obtain the preferred embodiments of the invention.
The positive progress effects of the invention are as follows:
In the invention, when the camera module of the intelligent terminal is used for shooting and scanning, whether the shooting plane of the camera module is parallel to the shot plane (such as the plane where a text or certificate lies) is detected in real time, and when they are not parallel, adjustment and correction are made in time until the two planes are parallel, thereby guaranteeing a high-quality scanning effect, improving the recognition rate and precision of the text content in the photographed object during shooting, and facilitating later correction or cropping of the scanned or photographed text; when the scanned text is oversized, segmented scanning is adopted to obtain a plurality of segmented scanned images, which are spliced to obtain the target scanned text image, achieving accurate scanning and later restoration of the oversized text, better meeting the user's usage requirements and improving the user experience.
Drawings
Fig. 1 is a flowchart of a scan control method according to embodiment 1 of the present invention.
Fig. 2 is a schematic structural diagram of three camera modules in embodiment 2 of the present invention.
Fig. 3 is a first flowchart of a scan control method according to embodiment 2 of the present invention.
Fig. 4 is a schematic structural diagram of a binocular camera module in embodiment 2 of the present invention.
Fig. 5 is a second flowchart of the scan control method according to embodiment 2 of the present invention.
Fig. 6 is a schematic view of a first scene corresponding to the binocular camera module in embodiment 2 of the present invention.
Fig. 7 is a schematic view of a second scene corresponding to the binocular camera module in embodiment 2 of the present invention.
Fig. 8 is a schematic diagram of a third scene corresponding to the binocular camera module in embodiment 2 of the present invention.
Fig. 9 is a flowchart of a scan control method according to embodiment 3 of the present invention.
Fig. 10 is a first schematic diagram of segment scanning in the scanning control method according to embodiment 3 of the present invention.
Fig. 11 is a second schematic diagram of segment scanning in the scanning control method according to embodiment 3 of the present invention.
Fig. 12 is a third schematic diagram of segment scanning in the scanning control method according to embodiment 3 of the present invention.
Fig. 13 is a fourth schematic diagram of segment scanning in the scanning control method according to embodiment 3 of the present invention.
Fig. 14 is a block diagram of a scan control system according to embodiment 4 of the present invention.
Fig. 15 is a block diagram of a scan control system according to embodiment 5 of the present invention.
Fig. 16 is a block diagram of a scan control system according to embodiment 6 of the present invention.
Fig. 17 is a schematic structural diagram of an electronic device implementing a scan control method according to embodiment 7 of the present invention.
Detailed Description
The invention is further illustrated by the following embodiments, which are not intended to limit the scope of the invention.
Embodiment 1
The method of the embodiment is applied to the intelligent terminal provided with the camera module.
As shown in fig. 1, the scan control method of the present embodiment includes:
S101, acquiring a shot plane corresponding to an object to be scanned and a shooting plane corresponding to the camera module;
S102, judging whether the shot plane and the shooting plane are parallel, and if not, generating an adjusting instruction to adjust the shot plane and the shooting plane to parallel positions.
In this embodiment, the object to be scanned (such as a text or a certificate) does not need to be placed on a plane parallel to the ground, such as a desktop; even if the text or certificate is photographed at an angle, the scanning result is not tilted. This weakens the restrictions on the scanning scene, effectively improves the scanning efficiency and precision, and also improves the user experience.
In this embodiment, whether the shooting plane of the camera module is parallel to the shot plane is detected in real time, and adjustment and correction are made in time when they are not parallel so as to guarantee a high-quality scanning effect. This improves the recognition rate and precision of the text content in the photographed object during shooting, facilitates later correction or cropping of the scanned or photographed text, better meets the user's usage requirements and improves the user experience.
Embodiment 2
The scan control method of the present embodiment is a further improvement of embodiment 1, and specifically:
in a scheme that can be implemented, the camera module includes N image capture cameras, N is greater than or equal to 3 and is an integer.
Taking a triple camera, i.e., N = 3, as an example: when the camera module of an intelligent terminal (such as a smartphone) is used for shooting or scanning, as shown in fig. 2, the camera module consists of three image acquisition cameras that do not lie on the same straight line. In order to ensure a better scanning effect, the three cameras can be set or controlled by a program to image simultaneously, thereby improving the recognition efficiency and precision of the three-camera scan.
As shown in fig. 3, after step S101 and before step S102, the method further includes:
S10101, judging whether the marked position information corresponding to the vertical incidence point of each image acquisition camera is in the text range of the object to be scanned, if so, executing step S102; if not, executing step S10102;
S10102, controlling the camera module to move in the horizontal direction until the marked position information corresponding to the vertical incidence point of each image acquisition camera falls into the text range of the object to be scanned.
In order to ensure the scanning quality of the photographed text, the vertical incidence points of the three cameras must all lie within the text range. If it is identified that they are not all within the text range, the position of the camera module is adjusted automatically until all three lie within the text range; alternatively, reminder information is generated to prompt the user to move the intelligent terminal until they do. When the three cameras image simultaneously on the terminal device, their center positions are automatically marked in the imaging area, so that whether the vertical incidence points of the three cameras all lie within the text range can be identified automatically; the user can also manually adjust the center positions of the three cameras so that they fall within the text range.
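As a rough illustration of this check (not part of the original disclosure), the following Python/OpenCV sketch takes the text region to be the largest contour of a binarized preview frame and tests whether every marked incidence point lies inside it; the function and variable names are illustrative assumptions:

```python
import cv2

def incidence_points_in_text(frame_gray, incidence_points):
    """Return True if every marked vertical-incidence point lies inside the
    detected text region (approximated by the largest contour)."""
    # Binarize and find the outline of the text/certificate area.
    _, binary = cv2.threshold(frame_gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False
    text_contour = max(contours, key=cv2.contourArea)
    # pointPolygonTest >= 0 means the point is inside or on the boundary.
    return all(cv2.pointPolygonTest(text_contour, (float(x), float(y)), False) >= 0
               for x, y in incidence_points)

# Example: three marked incidence points (pixel coordinates) in the preview frame.
# if not incidence_points_in_text(gray_frame, [(400, 300), (520, 300), (460, 420)]):
#     ...  # move the camera module horizontally or prompt the user
```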
In an embodiment, the method of this embodiment further includes:
acquiring an imaging range of the camera module;
and judging whether the imaging range covers the text range of the whole object to be scanned, if not, controlling the camera module to move in the vertical direction until the imaging range covers the text range of the whole object to be scanned.
The scanning control method of this embodiment is suitable for scenes in which the object to be scanned, such as a certificate or a text, lies within the imaging range of the three cameras; whether the certificate or text is small or large, it is kept within the imaging range of the three cameras by adjusting the vertical distance between the intelligent terminal and the text.
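A minimal sketch of the coverage check, under the assumption that the text region is again taken as the largest contour and that a bounding box touching the frame border means the imaging range does not cover the whole text (names and the margin value are illustrative):

```python
import cv2

def imaging_range_covers_text(frame_gray, margin=5):
    """Heuristic: if the detected text bounding box touches the frame border,
    the imaging range does not cover the whole text."""
    h, w = frame_gray.shape[:2]
    _, binary = cv2.threshold(frame_gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False
    x, y, bw, bh = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return (x > margin and y > margin and
            x + bw < w - margin and y + bh < h - margin)

# If coverage is incomplete, the module (or the user) increases the vertical
# distance until imaging_range_covers_text() returns True.
```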
Further, step S102 includes:
S1021, collecting a text image corresponding to the object to be scanned;
S1022, extracting boundary shape information corresponding to the text area in the text image;
S1023, judging whether the boundary shape information meets the reference rectangle information, if so, executing step S1024; if not, executing step S1025;
S1024, determining that the shot plane and the shooting plane are parallel;
S1025, determining that the shot plane is not parallel to the shooting plane, carrying out perspective transformation processing from the boundary shape information to the reference rectangle information to obtain transformation parameters, and adjusting the shot plane and the shooting plane to parallel positions according to the transformation parameters.
In other words, the trapezoid or other irregular figure obtained from the photographed scan is subjected to perspective transformation into the corresponding reference rectangle (or another preset figure), the corresponding transformation parameters (including but not limited to rotation parameters and translation parameters) are obtained, and the shot plane and the shooting plane are automatically adjusted to parallel positions according to the transformation parameters; alternatively, reminder information is generated to prompt the user to manually adjust the shot plane and the shooting plane to parallel positions according to the transformation parameters. The automatic adjustment of the relative position of the shot plane and the shooting plane according to the transformation parameters is realized by driving the intelligent terminal or the photographed text by means of a hardware structure; the specific implementation of this hardware structure belongs to conventional technology in the field and is not described in detail here.
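For illustration only, a possible OpenCV sketch of this step: the detected text boundary (a quadrilateral) is compared with the reference rectangle, and the perspective transform between the two yields the transformation parameters; the tolerance value and the helper name are assumptions, not part of the original disclosure:

```python
import cv2
import numpy as np

def check_and_transform(text_quad, ref_width, ref_height, tol=0.05):
    """text_quad: 4 corner points of the detected text boundary, ordered
    top-left, top-right, bottom-right, bottom-left (pixel coordinates).
    Returns (is_parallel, homography_to_reference_rectangle)."""
    q = np.asarray(text_quad, dtype=np.float32)
    top    = np.linalg.norm(q[1] - q[0])
    bottom = np.linalg.norm(q[2] - q[3])
    left   = np.linalg.norm(q[3] - q[0])
    right  = np.linalg.norm(q[2] - q[1])
    # If opposite sides have (almost) equal length, the boundary already
    # matches the reference rectangle and the planes are treated as parallel.
    is_parallel = (abs(top - bottom) / max(top, bottom) < tol and
                   abs(left - right) / max(left, right) < tol)
    dst = np.array([[0, 0], [ref_width, 0],
                    [ref_width, ref_height], [0, ref_height]], dtype=np.float32)
    # Perspective transform from the detected boundary to the reference
    # rectangle; its parameters drive the adjustment instruction when the
    # planes are not parallel.
    H = cv2.getPerspectiveTransform(q, dst)
    return is_parallel, H
```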
In one embodiment, as shown in fig. 4, the camera module includes a first image acquisition camera and three second image acquisition cameras arranged around it; the first camera and each second camera form a binocular (stereo) pair, and the three peripheral cameras are able to capture images in turn in a time-sharing manner.
As shown in fig. 5, step S102 includes:
S1026, acquiring first distance information between the shooting plane where the three second image acquisition cameras are located and the shot plane by adopting a triangulation ranging technique based on the first image acquisition camera and the second image acquisition cameras;
S1027, judging whether first difference values between every two pieces of first distance information are smaller than a first set threshold, if so, executing step S1028; if not, executing step S1029;
S1028, determining that the shot plane and the shooting plane are parallel;
S1029, determining that the shot plane is not parallel to the shooting plane, selecting one piece of first distance information as the reference distance information, and generating an adjusting instruction to adjust all the remaining first distance information to the reference distance information so that the shot plane and the shooting plane reach parallel positions.
Preferably, the maximum value in all the first distance information is selected as reference distance information to ensure that the vertical incidence point of the camera is always in the text range in the adjusting process; of course, other distance information may be selected as the reference distance information according to the actual area requirement.
The following is a detailed description with reference to examples:
As shown in fig. 6, L1, L2 and L3 respectively denote the vertical distances from the three second image acquisition cameras to the photographed text. When L1, L2 and L3 are pairwise equal, or their pairwise differences are smaller than a certain threshold, the shot plane is determined to be parallel to the shooting plane; otherwise the two planes are not parallel. In other words, the three distances are equal in the parallel case and unequal in the non-parallel case.
As shown in fig. 7, the position between the shot plane and the shooting plane then needs to be adjusted automatically until L1, L2 and L3 are all equal or their differences are smaller than a certain threshold.
Specifically, as shown in fig. 8, the camera among the three whose vertical distance to the photographed text or certificate is the largest is identified, and that distance is taken as the reference distance. Assuming camera 1 is the farthest, the position of the mobile phone is adjusted in turn in the directions of the arrows at cameras 2 and 3 (the adjustment directions can be marked in the viewfinder) until the distances from cameras 2 and 3 to the text are equal to the distance from camera 1, or differ from it by less than a certain error.
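The distance-based parallelism check and the choice of the maximum distance as the reference can be sketched as follows (pure Python; the threshold and the example values are purely illustrative):

```python
def plane_adjustment(distances, threshold):
    """distances: vertical distances L1, L2, L3 from the three second cameras
    to the photographed text; threshold: first set threshold for the pairwise
    differences. Returns (is_parallel, reference_distance, corrections)."""
    pairwise = [abs(a - b) for i, a in enumerate(distances)
                           for b in distances[i + 1:]]
    is_parallel = all(d < threshold for d in pairwise)
    # The maximum distance is taken as the reference so that the vertical
    # incidence points stay inside the text range while adjusting.
    reference = max(distances)
    corrections = [reference - d for d in distances]  # how far each side must move
    return is_parallel, reference, corrections

# Purely illustrative values (millimetres):
# plane_adjustment([182.0, 175.5, 176.2], threshold=2.0)
# -> (False, 182.0, [0.0, ~6.5, ~5.8]): the sides near cameras 2 and 3 must move
#    roughly 6 mm away from the text until all three distances match camera 1.
```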
In this embodiment, whether the shooting plane of the camera module is parallel to the shot plane is detected in real time, and adjustment and correction are made in time when they are not parallel so as to guarantee a high-quality scanning effect. This improves the recognition rate and precision of the text content in the photographed object during shooting, facilitates later correction or cropping of the scanned or photographed text, better meets the user's usage requirements and improves the user experience.
Embodiment 3
The scan control method of the present embodiment is a further improvement of embodiment 2, and specifically:
the scanning control method of the embodiment is suitable for the situation that the size of an object to be scanned, such as a certificate or a text, is too large, at the moment, the shot text is not clear due to the fact that the vertical distance between the intelligent terminal and the text is too far, and segmented scanning, distance measurement and later-stage image splicing are required to complete scanning of the oversized text.
The camera module of this embodiment further includes a TOF camera (as shown in fig. 2) for acquiring or recording the shooting distance between the shot plane and the shooting plane, so that the oversized text can be corrected.
Specifically, as shown in fig. 9, step S102 is followed by:
S103, acquiring second distance information between the plurality of image acquisition cameras and the object to be scanned by adopting the TOF camera;
S104, when the imaging range does not cover the text range of the whole object to be scanned and the second distance information is larger than a second set threshold, determining that the size of the object to be scanned is larger than the set size;
S105, carrying out segmented scanning processing on the object to be scanned to acquire a target scanning text image.
Specifically, step S105 includes:
presetting a scanning size of segmented scanning;
controlling a camera module to perform segmented scanning processing on an object to be scanned according to the scanning size and a set scanning path so as to obtain a plurality of segmented scanning images;
and acquiring time sequence information corresponding to each segmented scanning image, and splicing all the segmented scanning images according to the time sequence information to acquire a target scanning text image.
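A high-level sketch of this control loop in Python (the camera interface, scan-path representation and the registration helper passed in are hypothetical; a registration sketch appears later in this embodiment):

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class SegmentScan:
    image: np.ndarray      # segmented scanned image
    timestamp: float       # timing information recorded at capture
    distance: float        # TOF distance to the object at capture

def capture_segments(camera, scan_size, scan_path) -> List[SegmentScan]:
    """Drive the camera along the preset scan path, one segment per stop.
    `camera.capture(...)` and `scan_path` are assumed interfaces."""
    segments = []
    for position in scan_path:            # e.g. left to right, then next row
        img, t, d = camera.capture(position, scan_size)
        segments.append(SegmentScan(img, t, d))
    return segments

def stitch_by_timestamp(segments: List[SegmentScan], register_and_blend) -> np.ndarray:
    """Splice the segments in timing order; each new segment is registered
    against the running result by the supplied register_and_blend helper."""
    ordered = sorted(segments, key=lambda s: s.timestamp)
    result = ordered[0].image
    for seg in ordered[1:]:
        result = register_and_blend(result, seg)
    return result
```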
In an implementation scheme, the step of performing a segmented scanning process on the object to be scanned according to the scanning size and the set scanning path to obtain a plurality of segmented scanning images includes:
acquiring third distance information between the multiple image acquisition cameras and an object to be scanned when each sectional scanning image is acquired by adopting the TOF cameras;
acquiring a reserved registration area corresponding to each segmented scanning image;
and when the coincidence degree of the reserved registration area in the current segmented scanning image and the previous segmented scanning image meets a third set threshold value and a second difference value between third distance information corresponding to the two segmented scanning images is smaller than a fourth set threshold value, determining that the segmented scanning is successful, and continuously acquiring the next segmented scanning image until all segmented scanning images corresponding to the object to be scanned are acquired.
When the object to be scanned corresponds to a plurality of lines of transverse segmented scanning images and the current segmented scanning image reaches the transverse text boundary of the object to be scanned, line feed scanning is controlled;
and when the object to be scanned corresponds to a plurality of columns of longitudinal segmented scanning images and the current segmented scanning image reaches the text longitudinal boundary of the object to be scanned, controlling the column change scanning.
The reserved registration area corresponds to a transverse reserved registration area and a longitudinal reserved registration area corresponding to the joint of each segmented scanning image and other segmented scanning images.
In an aspect of an embodiment, the method further comprises:
carrying out scaling transformation processing on the current segmented scanning image according to the second difference value so as to scale the adjacent segmented scanning images to the same text scaling;
and performing translation and rotation processing on the current segmented scanned image after the zooming processing until the coincidence degree of the reserved registration area in the current segmented scanned image and the previous segmented scanned image meets a third set threshold value.
The working principle of the segmented scanning is specifically described below by taking the transverse segmented scanning as an example:
(1) Referring to fig. 10 and 11, a first picture A is scanned from the top left corner of the photographed text according to the preset scanning size of the segmented scanning, using the parallel-shooting scheme described above while the TOF camera performs distance detection during shooting; that is, during scanning, the distance data between the image acquisition cameras and the object to be scanned are recorded, the timing information of the current image is recorded, and the 10% area on the right side of the captured picture (i.e., the segmented scanned image) is extracted as the transverse reserved registration area S1, which provides a reference for the next transversely scanned image and information for later image registration and splicing. When the second segmented scan B needs to be shot, the mobile phone is moved to the right, taking the distance provided by the TOF camera and the right 10% of the first segmented scan as references (the user should try to include the right 10% of the first picture in the shot). When the difference between the distance information of two transversely adjacent segmented scanned images is smaller than a certain range and the coincidence degree of the transverse reserved registration area meets the set value, i.e., the registration of the overlapping area S2 (S2 ≤ S1) approximately satisfies the following transformation matrix, the current segmented scan is determined to be finished and scanning of the next segmented image continues:
1 0 0
0 1 0
0 0 1
(the 3×3 identity matrix, meaning the overlap registers without any additional transformation)
And so on until, during transverse segmented scanning, the current segmented scanned image reaches the transverse boundary of the photographed text, as shown in fig. 12, at which point line-feed scanning is triggered. Of course, the proportion occupied by the reserved registration area, the set value that the coincidence degree must satisfy, the allowed distance difference between two adjacent segmented scanned images, and so on can be set and adjusted according to the actual situation.
Referring to fig. 13, for the next line of scanning, a longitudinal reserved registration area also needs to be extracted in advance at the longitudinal joint of two vertically adjacent segmented scanned images. When the difference between the distance information of the two adjacent segmented scanned images is smaller than a certain range and the coincidence degree of the longitudinal reserved registration area S3 meets the set value, the current segmented scan is determined to be finished, and scanning of the next segmented image of the current line continues. This is repeated until all segmented scanned images corresponding to the text to be scanned are obtained.
In addition, when the difference between the distance information of two adjacent segmented scanned images or the matching result does not meet the requirements, the current segmented scanned image is scaled so that the adjacent segmented scanned images are brought to the same text scale; the segmented image is then translated and rotated until the coincidence degree of the reserved registration area between the current segmented scanned image and the previous one meets the requirement.
Based on the above process, segmented scanned images that are adjacent in time sequence are spliced in order of their timing information (transversely spliced along a line from left to right, then switched to the next line for longitudinal splicing), i.e., the segmented scanned image with the later timing is spliced using the image with the earlier timing as the reference. The later segmented scanned images are transformed as required: scaled according to the difference of the distance information of the two images, and accurately registered according to the translation and rotation transformation matrix obtained at shooting time. After the correction, registration and splicing of all segmented scanned images are completed, the scanning result corresponding to the oversized text is finally obtained.
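As an illustrative sketch of this registration step (OpenCV assumed; the thresholds and the use of normalized cross-correlation are assumptions, not taken from the disclosure, and only the translation part of the registration is shown here — rotation would be recovered analogously):

```python
import cv2

def register_segment(prev_img, cur_img, d_prev, d_cur,
                     overlap_ratio=0.10, min_coincidence=0.9, max_dist_diff=5.0):
    """Register the current segmented scan against the previous one via the
    reserved right-hand registration strip. Returns (success, (dx, dy))."""
    if abs(d_cur - d_prev) > max_dist_diff:
        # Distances differ too much: rescale the current segment so that the
        # two adjacent segments share the same text scale.
        scale = d_cur / d_prev
        cur_img = cv2.resize(cur_img, None, fx=scale, fy=scale,
                             interpolation=cv2.INTER_LINEAR)
    h = min(prev_img.shape[0], cur_img.shape[0])
    strip = prev_img[:h, int(prev_img.shape[1] * (1 - overlap_ratio)):]  # area S1
    # Normalized cross-correlation locates the strip inside the new segment;
    # the peak value serves as the coincidence degree of the overlap S2.
    res = cv2.matchTemplate(cur_img[:h], strip, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(res)
    return max_val >= min_coincidence, max_loc
```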
For the longitudinal segment scanning, the working principle of the corresponding segment scanning is similar to that of the transverse segment scanning, and therefore, the detailed description is omitted here.
The oversized-text scanning in this embodiment yields high scanning quality and fast later correction; the correction is generated fully automatically at shooting time and needs no manual intervention, which guarantees correction efficiency and accuracy. In addition, during image registration, two complete images do not need to be registered against each other; only the partial registration areas of two adjacent segmented scanned images are registered, and only the translation and rotation variables need to be obtained, which improves the registration speed.
In this embodiment, when the camera module of the intelligent terminal is used for shooting and scanning, whether the shooting plane of the camera module is parallel to the shot plane (such as the plane where a text or certificate lies) is detected in real time, and when they are not parallel, adjustment and correction are made in time until the two planes are parallel, thereby guaranteeing a high-quality scanning effect, improving the recognition rate and precision of the text content in the photographed object during shooting, and facilitating later correction or cropping of the scanned or photographed text. When the scanned text is oversized, segmented scanning is adopted to obtain a plurality of segmented scanned images, which are spliced to obtain the target scanned text image, achieving accurate scanning and later restoration of the oversized text, better meeting the user's usage requirements and improving the user experience.
Embodiment 4
The scanning control system of the embodiment is applied to the intelligent terminal provided with the camera module.
As shown in fig. 14, the scan control system of the present embodiment includes:
the plane acquisition module 1 is used for acquiring a shot plane corresponding to an object to be scanned and a shooting plane corresponding to the camera module;
and the control module 2 is used for judging whether the shot plane and the shooting plane are parallel or not, and if not, generating an adjusting instruction to adjust the shot plane and the shooting plane to be parallel.
In this embodiment, the object to be scanned (such as a text or a certificate) does not need to be placed on a plane parallel to the ground, such as a desktop; even if the text or certificate is photographed at an angle, the scanning result is not tilted. This weakens the restrictions on the scanning scene, effectively improves the scanning efficiency and precision, and also improves the user experience.
In this embodiment, whether the shooting plane of the camera module is parallel to the shot plane is detected in real time, and adjustment and correction are made in time when they are not parallel so as to guarantee a high-quality scanning effect. This improves the recognition rate and precision of the text content in the photographed object during shooting, facilitates later correction or cropping of the scanned or photographed text, better meets the user's usage requirements and improves the user experience.
Embodiment 5
The scanning control system of the present embodiment is a further improvement of embodiment 4, specifically:
in a scheme that can be implemented, the camera module includes N image capture cameras, N is greater than or equal to 3 and is an integer.
Taking a triple camera, i.e., N = 3, as an example: when the camera module of an intelligent terminal (such as a smartphone) is used for shooting or scanning, as shown in fig. 2, the camera module consists of three image acquisition cameras that do not lie on the same straight line. In order to ensure a better scanning effect, the three cameras can be set or controlled by a program to image simultaneously, thereby improving the recognition efficiency and precision of the three-camera scan.
As shown in fig. 15, the system of the present embodiment further includes:
the first judging module 3 is used for judging whether the marking position information corresponding to the vertical incidence point of each image acquisition camera is in the text range of the object to be scanned, if so, the control module 2 is called to judge whether the shot plane is parallel to the shooting plane;
if not, the control module 2 is called to control the camera module to move in the horizontal direction until the marking position information corresponding to the vertical incidence point of each image acquisition camera falls into the text range of the object to be scanned.
In order to ensure the scanning quality of the photographed text, the vertical incidence points of the three cameras must all lie within the text range. If it is identified that they are not all within the text range, the position of the camera module is adjusted automatically until all three lie within the text range; alternatively, reminder information is generated to prompt the user to move the intelligent terminal until they do. When the three cameras image simultaneously on the terminal device, their center positions are automatically marked in the imaging area, so that whether the vertical incidence points of the three cameras all lie within the text range can be identified automatically; the user can also manually adjust the center positions of the three cameras so that they fall within the text range.
In an embodiment, the system of this embodiment further includes:
the imaging range acquisition module 4 is used for acquiring the imaging range of the camera module;
and the second judging module 5 is used for judging whether the imaging range covers the text range of the whole object to be scanned, if not, the control module 2 is called to control the camera module to move in the vertical direction until the imaging range covers the text range of the whole object to be scanned.
The scanning control method of this embodiment is suitable for scenes in which the object to be scanned, such as a certificate or a text, lies within the imaging range of the three cameras; whether the certificate or text is small or large, it is kept within the imaging range of the three cameras by adjusting the vertical distance between the intelligent terminal and the text.
In an embodiment, the control module 2 of the present embodiment includes:
the text image acquisition unit is used for acquiring a text image corresponding to an object to be scanned;
the information extraction unit is used for extracting boundary shape information corresponding to a text area in the text image;
the first judging unit is used for judging whether the boundary shape information meets the reference rectangle information, and if so, the first determining unit is called to determine that the shot plane and the shooting plane are parallel; if not, the first determining unit is called to determine that the shot plane is not parallel to the shooting plane;
a transformation processing unit for performing perspective transformation processing from the boundary shape information to the reference rectangle information to obtain transformation parameters when the shot plane and the shooting plane are not parallel;
and a first adjusting unit for adjusting the shot plane and the shooting plane to parallel positions according to the transformation parameters.
In other words, the trapezoid or other irregular figure obtained from the photographed scan is subjected to perspective transformation into the corresponding reference rectangle (or another preset figure), the corresponding transformation parameters (including but not limited to rotation parameters and translation parameters) are obtained, and the shot plane and the shooting plane are automatically adjusted to parallel positions according to the transformation parameters; alternatively, reminder information is generated to prompt the user to manually adjust the shot plane and the shooting plane to parallel positions according to the transformation parameters. The automatic adjustment of the relative position of the shot plane and the shooting plane according to the transformation parameters is realized by driving the intelligent terminal or the photographed text by means of a hardware structure; the specific implementation of this hardware structure belongs to conventional technology in the field and is not described in detail here.
In one embodiment, as shown in fig. 4, the camera module includes a first image acquisition camera and three second image acquisition cameras arranged around it; the first camera and each second camera form a binocular (stereo) pair, and the three peripheral cameras are able to capture images in turn in a time-sharing manner.
The control module 2 in this embodiment includes:
the first distance information acquisition unit is used for acquiring first distance information between the shooting plane where the three second image acquisition cameras are located and the shot plane by adopting a triangulation ranging technique based on the first image acquisition camera and the second image acquisition cameras;
the second judging unit is used for judging whether first difference values between every two pieces of first distance information are smaller than a first set threshold value, and if so, the second determining unit is called to determine that the shot plane is parallel to the shooting plane; if not, calling a second determining unit to determine that the shot plane is not parallel to the shooting plane;
a reference information selecting unit for selecting one piece of first distance information as the reference distance information;
and a second adjusting unit for generating an adjusting instruction to adjust all remaining first distance information to the reference distance information so that the shot plane and the shooting plane reach parallel positions.
The reference information selecting unit is used for selecting the maximum value in all the first distance information as reference distance information so as to ensure that the vertical incidence point of the camera is always in the text range in the adjusting process; of course, other distance information may be selected as the reference distance information according to the actual area requirement.
The following is a detailed description with reference to examples:
As shown in fig. 6, L1, L2 and L3 respectively denote the vertical distances from the three second image acquisition cameras to the photographed text. When L1, L2 and L3 are pairwise equal, or their pairwise differences are smaller than a certain threshold, the shot plane is determined to be parallel to the shooting plane; otherwise the two planes are not parallel.
As shown in fig. 7, the position between the shot plane and the shooting plane then needs to be adjusted automatically until L1, L2 and L3 are all equal or their differences are smaller than a certain threshold. Specifically, as shown in fig. 8, the camera among the three whose vertical distance to the photographed text or certificate is the largest is identified, and that distance is taken as the reference distance. Assuming camera 1 is the farthest, the position of the mobile phone is adjusted in turn in the directions of the arrows at cameras 2 and 3 (the adjustment directions can be marked in the viewfinder) until the distances from cameras 2 and 3 to the text are equal to the distance from camera 1, or differ from it by less than a certain error.
In this embodiment, whether the shooting plane of the camera module is parallel to the shot plane is detected in real time, and adjustment and correction are performed in time when they are not parallel, so as to guarantee a high-quality scanning effect. This improves the recognition rate and precision of the text content in the shot object during shooting, facilitates later correction or cropping of the scanned or shot text, better meets the user's usage needs and improves the user experience.
Example 6
The scanning control system of the present embodiment is a further improvement of embodiment 5, specifically:
the scanning control system of the embodiment is suitable for the situation that the size of an object to be scanned, such as a certificate or a text, is too large, at the moment, the shot text is not clear due to the fact that the vertical distance between the intelligent terminal and the text is too far, and segmented scanning, distance measurement and later-stage image splicing are needed to complete scanning of the oversized text.
The camera module of this embodiment further includes a TOF camera, which is used to acquire or record the shooting distance between the shot plane and the shooting plane so that the oversized text can be corrected.
As shown in fig. 16, the system of the present embodiment further includes:
the distance information acquisition module 6 is used for acquiring second distance information between the plurality of image capturing cameras and the object to be scanned by adopting the TOF camera;
the text size determining module 7 is configured to determine that the size of the object to be scanned is larger than a set size when the imaging range does not cover the text range of the entire object to be scanned and the second distance information is larger than a second set threshold (this decision is sketched after this list);
and the segmented scanning processing module 8 is used for performing segmented scanning processing on the object to be scanned to acquire a target scanning text image.
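A hedged sketch of the decision made by modules 6 and 7: segmented scanning is triggered only when the imaging range fails to cover the full text range and the TOF distance already exceeds the set threshold. The rectangle representation, the function name and the millimetre threshold are assumptions:

```python
def needs_segmented_scan(imaging_rect, text_rect, tof_distance_mm,
                         max_distance_mm=400.0):
    """Each rect is (x, y, width, height) in a common coordinate frame."""
    ix, iy, iw, ih = imaging_rect
    tx, ty, tw, th = text_rect
    covered = (ix <= tx and iy <= ty and
               ix + iw >= tx + tw and iy + ih >= ty + th)
    # Oversized text: the frame cannot cover it even though the camera is
    # already farther away than the second set threshold.
    return (not covered) and tof_distance_mm > max_distance_mm
```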
Specifically, the segmentation scan processing module 8 includes:
a scanning size presetting unit for presetting a scanning size of the segment scanning;
the segmented scanning image acquisition unit is used for controlling the camera module to perform segmented scanning processing on the object to be scanned according to the scanning size and the set scanning path so as to acquire a plurality of segmented scanning images;
and the target image acquisition unit is used for acquiring the time sequence information corresponding to each segmented scanning image and splicing all the segmented scanning images according to the time sequence information to acquire a target scanning text image (this capture-and-stitch flow is sketched after this list).
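A minimal sketch of the overall flow of module 8, assuming OpenCV is available: capture one tile per position of the set scanning path, keep per-tile timestamps as the time sequence information, then stitch in time order. capture_tile() is a hypothetical hook standing in for the camera module, and the use of OpenCV's SCANS stitching mode is an implementation choice, not something specified by this disclosure:

```python
import time
import cv2

def segmented_scan(scan_path, capture_tile):
    tiles = []
    for position in scan_path:                    # preset scanning path
        image = capture_tile(position)            # one segmented scanned image
        tiles.append((time.time(), image))        # record time sequence information
    tiles.sort(key=lambda t: t[0])                # splice in time order

    # SCANS mode assumes a mostly flat scene, which suits documents.
    stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
    status, target_image = stitcher.stitch([img for _, img in tiles])
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return target_image
```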
The segmented scanning image acquisition unit includes:
the distance information acquisition subunit is used for acquiring, by adopting the TOF camera, third distance information between the plurality of image capturing cameras and the object to be scanned at the moment each segmented scanned image is acquired;
a reserved registration area obtaining subunit, configured to obtain a reserved registration area corresponding to each segmented scanned image;
and the segmented scanning subunit is used for determining that the segmented scanning is successful when the coincidence degree between the reserved registration areas of the current segmented scanned image and the previous segmented scanned image meets a third set threshold and the second difference value between the third distance information corresponding to the two segmented scanned images is smaller than a fourth set threshold, and for continuing to acquire the next segmented scanned image until all segmented scanned images corresponding to the object to be scanned are acquired (this success test is sketched after this list).
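An illustrative form of the success test in the segmented scanning subunit: the reserved registration areas of two adjacent tiles must coincide well enough, and the TOF distances at which they were captured must be close. Using normalised cross-correlation as the coincidence measure, and the specific threshold values, are assumptions made for this sketch:

```python
import cv2

def segment_ok(prev_region, curr_region, prev_dist, curr_dist,
               overlap_threshold=0.8, dist_threshold_mm=5.0):
    """prev_region / curr_region: BGR crops of the reserved registration
    areas, assumed here to be the same size."""
    prev_gray = cv2.cvtColor(prev_region, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_region, cv2.COLOR_BGR2GRAY)
    # Normalised cross-correlation as a simple proxy for the coincidence degree.
    score = cv2.matchTemplate(curr_gray, prev_gray, cv2.TM_CCOEFF_NORMED).max()
    distance_ok = abs(curr_dist - prev_dist) < dist_threshold_mm   # second difference value
    return score >= overlap_threshold and distance_ok
```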
When the segmentation scan fails, the system of this embodiment further includes:
the scaling conversion processing module is used for performing scaling conversion processing on the current segmented scanned image according to the second difference value so as to scale the adjacent segmented scanned images to the same text scale;
and the translation and rotation processing module is used for performing translation and rotation processing on the scaled current segmented scanned image until the coincidence degree between the reserved registration areas of the current segmented scanned image and the previous segmented scanned image meets the third set threshold (an illustrative alignment sketch follows this list).
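A hedged sketch of this failure branch: the current tile is first rescaled so that the two adjacent tiles share the same text scale, and the residual translation and rotation are then recovered with an ECC alignment restricted to Euclidean motion. Deriving the scale factor from the ratio of the TOF distances (a pinhole-camera approximation) and the choice of ECC are assumptions, not the method prescribed here:

```python
import cv2
import numpy as np

def rescale_and_align(prev_gray, curr_gray, prev_dist, curr_dist):
    """prev_gray / curr_gray: grayscale registration areas of adjacent tiles."""
    scale = prev_dist / curr_dist                     # pinhole approximation
    curr_scaled = cv2.resize(curr_gray, None, fx=scale, fy=scale,
                             interpolation=cv2.INTER_LINEAR)

    # ECC needs equal-sized inputs; crop both to the common overlap.
    h = min(prev_gray.shape[0], curr_scaled.shape[0])
    w = min(prev_gray.shape[1], curr_scaled.shape[1])
    template, moving = prev_gray[:h, :w], curr_scaled[:h, :w]

    warp = np.eye(2, 3, dtype=np.float32)             # rotation + translation only
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
    _, warp = cv2.findTransformECC(template, moving, warp,
                                   cv2.MOTION_EUCLIDEAN, criteria)
    aligned = cv2.warpAffine(moving, warp, (w, h),
                             flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
    return aligned, warp
```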
The segmentation scan processing module 8 of this embodiment further includes:
the line feed scanning control unit is used for controlling line feed scanning when the object to be scanned corresponds to a plurality of lines of transverse segmented scanning images and the current segmented scanning image reaches the transverse text boundary of the object to be scanned;
and the column-changing scanning control unit is used for controlling column-changing scanning when the object to be scanned corresponds to a plurality of columns of longitudinal segmented scanning images and the current segmented scanning image reaches the text longitudinal boundary of the object to be scanned.
The reserved registration area comprises a transverse reserved registration area and a longitudinal reserved registration area located where each segmented scanned image joins the adjacent segmented scanned images.
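An illustrative generator for the scan path implied by the line-feed and column-change control units together with the reserved registration areas just described: tiles are laid out row by row, each overlapping its neighbours by a reserved registration margin, with a line feed at the transverse text boundary. All sizes are hypothetical and the margin is assumed to be smaller than the tile size:

```python
def build_scan_path(text_w, text_h, tile_w, tile_h, margin):
    """Return (x, y) tile origins covering a text of text_w x text_h."""
    step_x, step_y = tile_w - margin, tile_h - margin
    path, y = [], 0
    while y < text_h:
        x = 0
        while x < text_w:
            path.append((x, y))
            if x + tile_w >= text_w:      # transverse text boundary reached
                break                     # -> line feed to the next row
            x += step_x
        if y + tile_h >= text_h:          # longitudinal boundary reached -> done
            break
        y += step_y
    return path
```

A column-first variant (column-change scanning) would simply swap the roles of the two loops.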
For the basic implementation principle of transverse segmented scanning and longitudinal segmented scanning, reference is made to embodiment 3, which is not repeated here.
The oversized-text scanning in this embodiment yields high scanning quality and fast later correction; the result is generated fully automatically during shooting without manual correction, which guarantees correction efficiency and accuracy. In addition, during image registration two complete images do not need to be registered; only the partial registration areas of two adjacent segmented scanned images are registered, and only the translation and rotation variables need to be obtained, which improves the registration speed.
In this embodiment, when the camera module in the intelligent terminal is used for shooting and scanning, whether the shooting plane of the camera module is parallel to the shot plane (for example the plane where a text or certificate lies) is detected in real time, and when the two planes are not parallel they are adjusted and corrected in time until they are parallel, thereby ensuring a high-quality scanning effect, improving the recognition rate and precision of the text content in the shot object during shooting, and facilitating later correction or cropping of the scanned or shot text; when the size of the scanned text is too large, segmented scanning is adopted to obtain a plurality of segmented scanned images, which are stitched into a target scanned text image, achieving accurate scanning and later restoration of the oversized text, better meeting the user's usage needs and improving the user experience.
Example 7
Fig. 17 is a schematic structural diagram of an electronic device according to embodiment 7 of the present invention. The electronic device comprises a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the scanning control method in any of the embodiments 1 or 2 when executing the program. The electronic device 30 shown in fig. 17 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiment of the present invention.
As shown in fig. 17, the electronic device 30 may be embodied in the form of a general purpose computing device, which may be, for example, a server device. The components of the electronic device 30 may include, but are not limited to: the at least one processor 31, the at least one memory 32, and a bus 33 connecting the various system components (including the memory 32 and the processor 31).
The bus 33 includes a data bus, an address bus, and a control bus.
The memory 32 may include volatile memory, such as random access memory (RAM) 321 and/or cache memory 322, and may further include read-only memory (ROM) 323.
The processor 31 executes various functional applications and data processing, such as the scan control method in any of embodiments 1 or 2 of the present invention, by running a computer program stored in the memory 32.
The electronic device 30 may also communicate with one or more external devices 34 (e.g., keyboard, pointing device, etc.). Such communication may be through input/output (I/O) interfaces 35. Also, the electronic device 30 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via the network adapter 36. As shown in FIG. 17, network adapter 36 communicates with the other modules of electronic device 30 via bus 33. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 30, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID (disk array) systems, tape drives, and data backup storage systems, etc.
It should be noted that although in the above detailed description several units/modules or sub-units/modules of the electronic device are mentioned, such a division is merely exemplary and not mandatory. Indeed, the features and functionality of two or more of the units/modules described above may be embodied in one unit/module according to embodiments of the invention. Conversely, the features and functions of one unit/module described above may be further divided into embodiments by a plurality of units/modules.
Example 8
The present embodiment provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the steps in the scan control method in any of embodiments 1 or 2.
More specific examples (a non-exhaustive list) of the readable storage medium include: a portable disk, a hard disk, a random access memory, a read-only memory, an erasable programmable read-only memory, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a possible implementation, the present invention may also be implemented in the form of a program product including program code for causing a terminal device to perform the steps of implementing the scan control method in any of embodiments 1 or 2 when the program product is run on the terminal device.
Program code for carrying out the invention may be written in any combination of one or more programming languages, and may execute entirely on the user device, partly on the user device, as a stand-alone software package, partly on the user device and partly on a remote device, or entirely on the remote device.
While specific embodiments of the invention have been described above, it will be appreciated by those skilled in the art that this is by way of example only, and that the scope of the invention is defined by the appended claims. Various changes and modifications to these embodiments may be made by those skilled in the art without departing from the spirit and scope of the invention, and these changes and modifications are within the scope of the invention.
Claims (16)
1. A scanning control method is applied to an intelligent terminal provided with a camera module, and comprises the following steps:
acquiring a shot plane corresponding to an object to be scanned and a shooting plane corresponding to the camera module;
and judging whether the shot plane and the shooting plane are parallel, if not, generating an adjusting instruction to adjust the shot plane and the shooting plane to be parallel.
2. The scan control method of claim 1, wherein the step of judging whether the shot plane and the shooting plane are parallel, and if not, generating an adjusting instruction to adjust the shot plane and the shooting plane to parallel positions comprises:
acquiring a text image corresponding to the object to be scanned;
extracting boundary shape information corresponding to a text area in the text image;
judging whether the boundary shape information meets reference rectangle information or not, and if so, determining that the shot plane is parallel to the shooting plane;
if not, determining that the shot plane is not parallel to the shooting plane, carrying out perspective transformation on the boundary shape information to the reference rectangle information to obtain transformation parameters, and adjusting the shot plane and the shooting plane to be parallel to each other according to the transformation parameters.
3. The scan control method of claim 1, wherein the camera module comprises a first image capturing camera and three second image capturing cameras disposed around the first image capturing camera;
the step of judging whether the shot plane and the shooting plane are parallel or not, and if not, generating an adjusting instruction to adjust the shot plane and the shooting plane to parallel positions comprises the following steps:
acquiring first distance information between the shooting plane and the shot plane where the three second image acquisition cameras are located by adopting a triangular ranging technology based on the first image acquisition cameras and the second image acquisition cameras;
judging whether first difference values between every two pieces of first distance information are smaller than a first set threshold value, and if so, determining that the shot plane is parallel to the shooting plane;
if not, determining that the shot plane is not parallel to the shooting plane, selecting one piece of first distance information as reference distance information, and generating an adjusting instruction to adjust all remaining first distance information to the reference distance information so as to enable the shot plane and the shooting plane to reach parallel positions.
4. The scan control method of claim 3, wherein the step of selecting one piece of first distance information as the reference distance information comprises:
and selecting the maximum value in all the first distance information as the reference distance information.
5. The scan control method according to claim 1 or 2, wherein the camera module comprises N image capturing cameras, N is greater than or equal to 3 and is an integer;
the step of judging whether the shot plane is parallel to the shooting plane further comprises the following steps:
judging whether the marked position information corresponding to the vertical incidence point of each image acquisition camera is in the text range of the object to be scanned, if so, executing the step of judging whether the shot plane is parallel to the shooting plane;
and if not, controlling the camera module to move in the horizontal direction until the marking position information corresponding to the vertical incidence point of each image acquisition camera falls into the text range of the object to be scanned.
6. The scan control method of claim 5, further comprising:
acquiring an imaging range of the camera module;
and judging whether the imaging range covers the whole text range of the object to be scanned, if not, controlling the camera module to move in the vertical direction until the imaging range covers the whole text range of the object to be scanned.
7. The scan control method of claim 6, wherein the camera module further comprises a TOF camera;
the method further comprises the following steps:
acquiring second distance information between the image acquisition cameras and the object to be scanned by adopting the TOF camera;
and when the imaging range does not cover the whole text range of the object to be scanned and the second distance information is larger than a second set threshold, determining that the size of the object to be scanned is larger than a set size, and performing segmented scanning on the object to be scanned to acquire a target scanning text image.
8. The scan control method of claim 7, wherein the step of performing a segmented scan process on the object to be scanned to obtain a target scan text image comprises:
presetting a scanning size of segmented scanning;
controlling the camera module to perform segmented scanning processing on the object to be scanned according to the scanning size and the set scanning path so as to obtain a plurality of segmented scanning images;
and acquiring time sequence information corresponding to each segmented scanning image, and splicing all the segmented scanning images according to the time sequence information to acquire a target scanning text image.
9. The scan control method according to claim 8, wherein the step of performing a segmented scan process on the object to be scanned according to the scan size and the set scan path to obtain a plurality of segmented scan images comprises:
acquiring third distance information between the plurality of image acquisition cameras and the object to be scanned when each sectional scanning image is acquired by adopting the TOF camera;
acquiring a reserved registration area corresponding to each segmented scanning image;
and when the coincidence degree of the reserved registration area in the current segmented scanning image and the previous segmented scanning image meets a third set threshold value and a second difference value between third distance information corresponding to the two segmented scanning images is smaller than a fourth set threshold value, determining that the segmented scanning is successful, and continuously acquiring the next segmented scanning image until all segmented scanning images corresponding to the object to be scanned are acquired.
10. The scan control method of claim 9, wherein when the segment scan fails, the method further comprises:
carrying out scaling transformation processing on the current segmented scanning image according to the second difference value so as to scale the adjacent segmented scanning images to the same text scaling;
and performing translation and rotation processing on the current segmented scanned image after the zooming processing until the coincidence degree of the reserved registration area in the current segmented scanned image and the previous segmented scanned image meets the third set threshold value.
11. The scan control method of claim 9, wherein the method further comprises:
when the object to be scanned corresponds to a plurality of lines of transverse segmented scanning images and the current segmented scanning image reaches the transverse text boundary of the object to be scanned, controlling line feed scanning;
and when the object to be scanned corresponds to a plurality of columns of longitudinal segmented scanning images and the current segmented scanning image reaches the text longitudinal boundary of the object to be scanned, controlling the column change scanning.
12. The scan control method of any one of claims 9 to 11, wherein the reserved registration region corresponds to a lateral reserved registration region and a longitudinal reserved registration region corresponding to where each segmented scanned image joins other segmented scanned images.
13. A scanning control system, characterized in that the scanning control system is applied to an intelligent terminal provided with a camera module, and the system comprises:
the plane acquisition module is used for acquiring a shot plane corresponding to an object to be scanned and a shooting plane corresponding to the camera module;
and the control module is used for judging whether the shot plane and the shooting plane are parallel or not, and if not, generating an adjusting instruction to adjust the shot plane and the shooting plane to be parallel.
14. An intelligent terminal, characterized in that the intelligent terminal comprises the scanning control system of claim 13.
15. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the scan control method of any of claims 1-12 when executing the computer program.
16. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the scan control method of any one of claims 1 to 12.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110661393.0A CN113556438B (en) | 2021-06-15 | 2021-06-15 | Scanning control method, system, electronic device and storage medium |
CN202211261015.4A CN115550517A (en) | 2021-06-15 | 2021-06-15 | Scanning control method, system, electronic device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110661393.0A CN113556438B (en) | 2021-06-15 | 2021-06-15 | Scanning control method, system, electronic device and storage medium |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211261015.4A Division CN115550517A (en) | 2021-06-15 | 2021-06-15 | Scanning control method, system, electronic device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113556438A (en) | 2021-10-26
CN113556438B CN113556438B (en) | 2022-09-27 |
Family
ID=78102075
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110661393.0A Active CN113556438B (en) | 2021-06-15 | 2021-06-15 | Scanning control method, system, electronic device and storage medium |
CN202211261015.4A Pending CN115550517A (en) | 2021-06-15 | 2021-06-15 | Scanning control method, system, electronic device and storage medium |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211261015.4A Pending CN115550517A (en) | 2021-06-15 | 2021-06-15 | Scanning control method, system, electronic device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN113556438B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116563118A (en) * | 2023-07-12 | 2023-08-08 | 浙江华诺康科技有限公司 | Endoscopic image stitching method and device and computer equipment |
CN117499549B (en) * | 2023-12-25 | 2024-05-14 | 荣耀终端有限公司 | Scanning method and electronic equipment |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105791660A (en) * | 2014-12-22 | 2016-07-20 | 中兴通讯股份有限公司 | Method and device for correcting photographing inclination of photographed object and mobile terminal |
US20180360408A1 (en) * | 2017-06-15 | 2018-12-20 | Shanghai United Imaging Healthcare Co., Ltd. | Imaging systems and methods thereof |
CN110213456A (en) * | 2019-06-10 | 2019-09-06 | Oppo广东移动通信有限公司 | Scan document bearing calibration, electronic device and computer readable storage medium |
CN112272267A (en) * | 2020-10-22 | 2021-01-26 | Oppo广东移动通信有限公司 | Shooting control method, shooting control device and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN115550517A (en) | 2022-12-30 |
CN113556438B (en) | 2022-09-27 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |