CN113810604A - Document shooting method and device - Google Patents


Info

Publication number
CN113810604A
Authority
CN
China
Prior art keywords
image
document
control
terminal equipment
interface
Prior art date
Legal status
Granted
Application number
CN202110926928.2A
Other languages
Chinese (zh)
Other versions
CN113810604B (en)
Inventor
付庆涛
张祎
孙力
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202110926928.2A
Publication of CN113810604A
Application granted
Publication of CN113810604B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/67 Focus control based on electronic image sensor signals

Abstract

An embodiment of the application provides a document shooting method and apparatus in the field of terminal technology. The method includes: the terminal device displays a first interface that includes a control for document shooting and a first preview image; the terminal device uses a motor to push the lens to the position indicated by a focusing parameter, where the focusing parameter is a preset constant; the terminal device starts focusing from the position indicated by the focusing parameter; the terminal device receives an operation on the control for document shooting; and, in response to that operation, the terminal device acquires a first image when focusing is completed. In this way, when the user starts the document shooting function, the terminal device can set a focal length suited to how users typically shoot documents and focus quickly from that position, improving both the focusing speed and the document shooting speed.

Description

Document shooting method and device
Technical Field
The application relates to the field of terminal technology, and in particular to a document shooting method and apparatus.
Background
With the popularization and development of the internet, users' functional requirements for terminal devices have become diverse. For example, to let users view documents on a terminal device at any time, more and more terminal devices support a document shooting function: a user can photograph a document and then, for instance, search questions online or extract the text from the picture.
Generally, when a document is shot with a terminal device, the device searches for focus starting from infinity until it finds a focus position at which a clear document picture can be taken, and then shoots the document picture based on that focus.
However, this document shooting method focuses slowly, which in turn slows down document shooting.
Disclosure of Invention
An embodiment of the application provides a document shooting method and apparatus that set a suitable focal length according to how users typically shoot documents, so that when the document shooting function is entered, fast focusing can be achieved from the preset focal length, improving both the focusing speed and the shooting speed.
In a first aspect, an embodiment of the present application provides a document shooting method applied to a terminal device that includes a motor for moving the lens in a camera. The method includes: the terminal device displays a first interface that includes a control for document shooting and a first preview image; the terminal device uses the motor to push the lens to the position indicated by a focusing parameter, where the focusing parameter is a preset constant; the terminal device starts focusing from the position indicated by the focusing parameter; the terminal device receives an operation on the control for document shooting; and, in response to that operation, the terminal device acquires a first image when focusing is completed. In this way, when the user starts the document shooting function, the terminal device can set a focal length suited to how users typically shoot documents and focus quickly from that position, improving both the focusing speed and the document shooting speed.
The first interface may be an interface corresponding to a document shooting mode, and the operation of selecting the control for document shooting may include a click operation or a slide operation.
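As a rough illustration (not part of the claims), the benefit of starting the focus search from a preset position rather than from infinity can be sketched as follows. The motor positions, step size, and in-focus position below are invented for illustration; real values depend on the camera module.

```python
import math

# Illustrative motor positions (assumed values).
FOCUS_PARAM_DOCUMENT = 350   # preset constant for document shooting
FOCUS_PARAM_INFINITY = 0     # start position of a focus search from infinity

def focus_search_steps(start_pos: int, in_focus_pos: int, step: int = 25) -> int:
    """Coarse autofocus steps needed to walk the lens from start_pos
    to the in-focus position in_focus_pos."""
    return math.ceil(abs(in_focus_pos - start_pos) / step)

# A document held at a typical shooting distance focuses near the preset
# position, so the search from the preset is much shorter.
in_focus = 340
steps_from_preset = focus_search_steps(FOCUS_PARAM_DOCUMENT, in_focus)
steps_from_infinity = focus_search_steps(FOCUS_PARAM_INFINITY, in_focus)
```

With these illustrative numbers, the search from the preset needs a single coarse step, while the search from infinity needs fourteen.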
In a possible implementation manner, before the terminal device displays the first interface, the method further includes: the terminal device receives an operation of opening a first application; in response to the operation of opening the first application, the terminal device displays a second interface, where the second interface includes a first menu bar, and the first menu bar includes a control for starting the document shooting mode and a control for starting the photographing mode; and the terminal device receives an operation of selecting the control for starting the document shooting mode. The displaying of the first interface by the terminal device includes: displaying the first interface in response to the operation of selecting the control for starting the document shooting mode. In this way, the user can conveniently start the document shooting mode in the camera application.
Wherein the first application may be a camera application; the second interface may be understood as an interface corresponding to when the camera application is opened, for example, the second interface may be an interface corresponding to a photographing mode; the document shooting mode may be understood as a document shooting mode in the embodiment of the present application.
In one possible implementation, the method further includes: the terminal device receives an operation of selecting the control for starting the photographing mode; in response to that operation, the terminal device uses the motor to push the lens to a second position, where the second position is different from the position indicated by the focusing parameter. Thus, in the photographing mode, the terminal device starts focusing from the second position, while in the document shooting mode it starts from the position indicated by the focusing parameter, making focusing faster in the document shooting mode.
Wherein the second position may be understood as an infinite distance indicated in the camera of the terminal device.
In one possible implementation, the method further includes: and the terminal equipment processes the document content in the first image to obtain a second image. Therefore, the terminal equipment can obtain the processed clearer document picture.
In a possible implementation manner, the first interface further includes a control for implementing the multi-shot mode; when this control is not selected, the processing, by the terminal device, of the document content in the first image includes: the terminal device displays a third interface, where the third interface includes one or more of the following: a third image, a control for document rectification, a control for saving the third image, or a control for deleting the second image; the third image is part or all of the second image; in the third interface, the third image is overlaid on the preview image corresponding to the first image; the terminal device receives an operation of selecting the control for saving the third image; and, in response to that operation, the terminal device rectifies the third image and saves it to the second application. In this way, a user can select a suitable document shooting mode as needed; for example, when shooting a single document, the user can take a single document picture and adjust it immediately to obtain a suitable result.
The third interface may be a page for implementing manual document rectification by a user, and the second application may be a gallery application.
In one possible implementation, the method further includes: the terminal equipment receives the operation of selecting a control for document correction; in response to an operation of selecting a control for document rectification, the screen size of the third image is in an editable state; the terminal equipment receives an operation aiming at the third image; in response to the operation for the third image, the terminal device performs document rectification on the screen-size-processed document in the third image. Therefore, the terminal equipment can realize manual document correction of the user, and further can improve the accuracy of document correction.
The editable state may be understood as a state in which the shape adjustment of the document selection box is possible.
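The perspective correction behind "document rectification" can be sketched as solving a homography that maps the four corners of the (possibly user-adjusted) selection box onto an upright rectangle. This NumPy-only sketch is an assumed stand-in for the technique, not the patent's implementation; a real pipeline would go on to warp the pixels with the resulting matrix.

```python
import numpy as np

def homography_from_corners(src, dst):
    """Solve the 3x3 homography H (with H[2,2] = 1) mapping the four
    (x, y) corner pairs in src onto dst, via the direct linear transform."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def project(H, pt):
    """Apply homography H to a single (x, y) point."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Corners of a skewed document in the photo -> corners of an upright page.
quad = [(12, 18), (205, 30), (198, 240), (20, 225)]
page = [(0, 0), (200, 0), (200, 260), (0, 260)]
H = homography_from_corners(quad, page)
```

By construction, each corner of the skewed quadrilateral lands exactly on the corresponding corner of the upright page.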
In a possible implementation manner, the first interface further includes a control for implementing the multi-shot mode; when this control is selected, the terminal device processes the document content in the first image to obtain a second image, and the method further includes: the terminal device saves the second image and switches to a fourth interface, where the fourth interface includes a control for document shooting and a second preview image. In this way, the user can select a suitable document shooting mode as needed; for example, when shooting multiple documents, the multi-shot mode allows multiple document pictures to be taken in quick succession.
Wherein, the fourth interface can be understood as an interface corresponding to the next document shooting.
In one possible implementation manner, the method further includes: the terminal device receives an operation of selecting the second application; in response, the terminal device displays a fifth interface, where the fifth interface includes the identifier of the second image; the terminal device receives an operation of selecting the identifier of the second image; in response, the terminal device displays a sixth interface, where the sixth interface includes the second image and a control for document rectification of the second image; the terminal device receives an operation of selecting that control; and, in response, the terminal device displays a seventh interface, where the seventh interface includes one or more of: a first document selection box for selecting some or all of the document in the second image, a control for saving the document in the first document selection box, a control for deleting the second image, or a control for restoring the second image. In this way, even if the document may have been misrecognized in the picture obtained in the document shooting mode, the user can select the document content to be recognized through manual adjustment.
The fifth interface may be understood as the interface displayed when the gallery application is opened; the sixth interface may be the interface displayed when display of the second image is triggered; and the seventh interface may be the interface for performing manual document rectification on the second image.
In a possible implementation manner, the terminal device further includes one or more of the following: the system comprises a camera sensor, an image signal processing ISP module, an image processing module, a document identification module or an image correction module; the method for processing the document content in the first image by the terminal device to obtain the second image includes: the terminal equipment utilizes the ISP module to process image data corresponding to the document content in the first image output by the camera sensor to obtain ISP processed image data; the terminal equipment utilizes the image processing module to perform image post-processing on the image data processed by the ISP to obtain the image data after the image processing; the terminal equipment performs document identification on the image data after the image processing by using a document identification module to obtain a target image after the document identification; and the terminal equipment utilizes the image correction module to perform document correction on the target image after the document identification to obtain a second image. Therefore, the terminal equipment can obtain the processed clearer document picture.
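The module chain named above (camera sensor output, ISP processing, image post-processing, document recognition, document rectification) can be sketched as a simple stage pipeline. The stage names and bodies here are placeholders standing in for the real processing, not the patent's implementation.

```python
from typing import Callable, List

Frame = dict
Stage = Callable[[Frame], Frame]

def isp_process(f: Frame) -> Frame:
    # stands in for the ISP's processing of the raw sensor data
    return {**f, "stages": f["stages"] + ["isp"]}

def image_post_process(f: Frame) -> Frame:
    # stands in for noise reduction / HDR fusion / text-clarity enhancement
    return {**f, "stages": f["stages"] + ["post"]}

def document_recognition(f: Frame) -> Frame:
    # stands in for locating the document within the frame
    return {**f, "stages": f["stages"] + ["recognize"]}

def image_rectification(f: Frame) -> Frame:
    # stands in for the perspective correction that yields the second image
    return {**f, "stages": f["stages"] + ["rectify"]}

PIPELINE: List[Stage] = [isp_process, image_post_process,
                         document_recognition, image_rectification]

def run_pipeline(sensor_frame: Frame) -> Frame:
    """Pass the frame through each module in order, as the text describes."""
    for stage in PIPELINE:
        sensor_frame = stage(sensor_frame)
    return sensor_frame

result = run_pipeline({"stages": []})
```

Each module consumes the previous module's output, mirroring the order given in the paragraph above.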
In one possible implementation, the ISP-processed image data includes photo stream data and preview stream data, and the image post-processing includes one or more of: image noise reduction, high-dynamic-range (HDR) image fusion, or character definition enhancement. The processing of the ISP-processed image data by the terminal device using the image processing module includes: the terminal device judges whether the preview stream data indicates an HDR scene; when it determines that the preview stream data indicates an HDR scene, the terminal device uses the image processing module to perform HDR image fusion on multiple photo-stream frames with different exposures and single-frame character definition enhancement; when it determines that the preview stream data does not indicate an HDR scene, the terminal device uses the image processing module to perform multi-frame image noise reduction on photo-stream frames with differing noise and single-frame character definition enhancement. In this way, the terminal device can obtain a document picture with a better visual effect in different scenes.
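The branch just described can be sketched as below. The text does not specify how an HDR scene is detected from the preview stream; the dynamic-range spread check here is an invented stand-in, as are the threshold and the returned labels.

```python
def is_hdr_scene(preview_luma, spread_threshold=0.6):
    """Invented HDR criterion: a large spread between the darkest and
    brightest normalized luma values seen in the preview stream."""
    return (max(preview_luma) - min(preview_luma)) > spread_threshold

def choose_processing(preview_luma):
    if is_hdr_scene(preview_luma):
        # fuse multiple photo-stream frames with different exposures,
        # then enhance character definition on the single fused frame
        return "hdr_fusion + text_sharpen"
    # reduce noise across multiple photo-stream frames with differing noise,
    # then enhance character definition on the single merged frame
    return "multiframe_denoise + text_sharpen"
```

The decision is made on the preview stream, while the chosen processing is applied to the photo stream, matching the split described above.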
In a possible implementation manner, the first interface further includes a control for automatically turning on the flash, and the method further includes: the terminal equipment receives the operation of selecting a control for automatically starting a flash lamp; responding to the operation of selecting the control for automatically starting the flash lamp, and displaying an eighth interface by the terminal equipment; wherein, the eighth interface comprises one or more of the following: a control for automatically turning on the flash, a control for turning off the flash, a control for turning on the flash, or a control for normally on the flash, which are displayed in a highlighted form. Therefore, the user can flexibly select the working mode of the flash lamp according to the shooting requirement.
In one possible implementation manner, the acquiring, by the terminal device, the first image when focusing is completed includes: the terminal equipment starts a flash lamp and acquires a first image when focusing is completed and the brightness of the picture is detected to be lower than a brightness threshold value. Therefore, under the condition of triggering the control for automatically starting the flash lamp, the terminal equipment can flexibly control the flash lamp to be started and closed according to the actual shooting condition during shooting, and further a document picture with a better picture effect can be obtained.
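Under the auto-flash setting, the capture-time rule reads as: fire the flash only once focusing has completed and the measured frame brightness is below the threshold. A minimal sketch, with an assumed threshold value:

```python
BRIGHTNESS_THRESHOLD = 0.35  # normalized mean luma; illustrative value

def should_fire_flash(focus_done: bool, frame_brightness: float) -> bool:
    """Auto-flash rule: the flash fires only when focusing is complete
    and the frame is darker than the brightness threshold."""
    return focus_done and frame_brightness < BRIGHTNESS_THRESHOLD
```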
In one possible implementation, the method further includes: responding to the operation of selecting the control for normally lighting the flash lamp, and lighting the flash lamp by the terminal equipment; in response to an operation of a control for document shooting, the terminal device acquires a first image in a state where a flash is normally on when focusing is completed. Therefore, no matter what document shooting scene is currently in, the terminal equipment can obtain a document picture with a better picture effect based on the normally-on flash lamp.
After the shooting of the first image is finished, the flash lamp is always in a normally-on state.
In one possible implementation, the method further includes: when the terminal device does not receive, within a first time threshold, an operation on the control for turning off the flash, the control for turning on the flash, or the control for keeping the flash constantly on, the terminal device switches back to the first interface. Switching back to the first interface signals to the user that no change was made to the flash setting.
In a possible implementation manner, the first interface further includes a second document selection box for framing the document detected by the terminal device, and the method further includes: when the terminal device detects that the proportion of the second document selection box in the picture is smaller than a first threshold, the terminal device displays a prompt message, where the prompt message instructs the user to move the terminal device closer to the document for shooting. In this way, even if the user is far from the document during shooting, the prompt displayed on the terminal device guides the user closer, so that the terminal device can recognize and obtain a clearer document picture.
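The "move closer" hint can be sketched as an area-ratio check between the detected document box and the preview frame. The 20% threshold is an assumption; the text only speaks of "a first threshold".

```python
def needs_move_closer(box_w, box_h, frame_w, frame_h, ratio_threshold=0.20):
    """True when the detected document box covers too small a fraction of
    the frame, i.e. the user should be prompted to move closer."""
    return (box_w * box_h) / (frame_w * frame_h) < ratio_threshold
```

For example, a 200x300 box in a 1080x1920 preview covers under 3% of the frame and would trigger the prompt.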
In a possible implementation manner, the ISP module is included in the terminal device, and the terminal device pushes the lens to a position indicated by the focusing parameter by using a motor, including: the terminal equipment sends the effect configuration parameters to the ISP module and pushes the lens to the position indicated by the focusing parameters by using the motor; the effect configuration parameters include focusing parameters.
In one possible implementation, the effect configuration parameters further include one or more of the following: a camera sensor map parameter, an exposure parameter, a contrast parameter, a sharpness parameter, or a sharpness parameter.
In one possible implementation manner, a plurality of sets of corresponding relations are stored in the terminal device, wherein one set of corresponding relations is used for indicating the relation between the focusing parameters and the shooting distance; the terminal equipment pushes the lens to the position indicated by the focusing parameter by using a motor, and comprises the following steps: the terminal equipment determines the shooting distance between the terminal equipment and the shot document; and the terminal equipment determines a focusing parameter corresponding to the shooting distance from the corresponding relation, and pushes the lens to a position indicated by the focusing parameter by using a motor. Therefore, the terminal equipment can realize rapid focusing based on focusing parameters in the corresponding relation when the document is shot at different distances, and further improve the document shooting speed.
Optionally, the corresponding relationship may also be stored in the server, and then the terminal device may send a request to the server to search for the corresponding relationship.
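The stored correspondence can be sketched as a table keyed by shooting distance: measure (or estimate) the distance to the document, pick the nearest entry, and push the lens to the focusing parameter it indicates. The table values below are invented for illustration; real values would come from calibration.

```python
# Assumed correspondence between shooting distance (cm) and focusing
# parameter (a lens motor position).
FOCUS_BY_DISTANCE_CM = {20: 420, 30: 370, 40: 330, 50: 300}

def focus_param_for_distance(distance_cm: float) -> int:
    """Look up the focusing parameter for the nearest tabulated distance."""
    nearest = min(FOCUS_BY_DISTANCE_CM, key=lambda d: abs(d - distance_cm))
    return FOCUS_BY_DISTANCE_CM[nearest]
```

Nearest-entry lookup keeps the table small while still giving a good starting position for the focus search at any distance in range.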
In one possible implementation, the shooting distance is 30-40 centimeters.
In one possible implementation, the first application is a camera application.
In a second aspect, an embodiment of the present application provides a document shooting apparatus. The apparatus includes a motor for moving the lens in a camera; a display unit configured to display a first interface, where the first interface includes a control for document shooting and a first preview image; and a processing unit configured to: push the lens, using the motor, to the position indicated by a focusing parameter, where the focusing parameter is a preset constant; start focusing from the position indicated by the focusing parameter; receive an operation on the control for document shooting; and, in response to that operation, acquire a first image when focusing is completed.
In a possible implementation manner, the processing unit is further configured to receive an operation of opening the first application; the display unit is configured to display a second interface in response to the operation of opening the first application, where the second interface includes a first menu bar, and the first menu bar includes a control for starting the document shooting mode and a control for starting the photographing mode; the processing unit is further configured to receive an operation of selecting the control for starting the document shooting mode; and the display unit is further configured to display the first interface in response to that operation.
In a possible implementation manner, the processing unit is further configured to receive an operation of selecting a control for starting a photographing mode; the processing unit is used for responding to the operation of selecting the control for starting the photographing mode and pushing the lens to the second position by using the motor; the second position is different from the position indicated by the focusing parameter.
In a possible implementation manner, the processing unit is further configured to process document content in the first image to obtain the second image.
In a possible implementation manner, the first interface further includes a control for implementing the multi-shot mode; when this control is not selected, the display unit is specifically configured to display a third interface, where the third interface includes one or more of the following: a third image, a control for document rectification, a control for saving the third image, or a control for deleting the second image; the third image is part or all of the second image; in the third interface, the third image is overlaid on the preview image corresponding to the first image; the processing unit is specifically configured to receive an operation of selecting the control for saving the third image; and, in response to that operation, the processing unit is further specifically configured to rectify the third image and save it to the second application.
In a possible implementation manner, the processing unit is further configured to receive an operation of selecting a control for document rectification; in response to an operation of selecting a control for document rectification, the screen size of the third image is in an editable state; a processing unit further configured to receive an operation for a third image; and in response to the operation on the third image, the processing unit is also used for carrying out document rectification on the document subjected to the picture size processing in the third image.
In a possible implementation manner, the first interface further includes a control for implementing the multi-shot mode, and when the control for implementing the multi-shot mode is selected, the processing unit is specifically configured to process document content in the first image to obtain a second image; the processing unit is further specifically used for storing the second image and switching to a fourth interface; the fourth interface comprises a control used for document shooting and a second preview image.
In a possible implementation, the processing unit is further configured to receive an operation of selecting the second application; the display unit is used for responding to the operation of selecting the second application and displaying a fifth interface; the fifth interface comprises the identifier of the second image; the processing unit is also used for receiving the operation of selecting the identifier of the second image; the display unit is used for responding to the operation of selecting the identifier of the second image and displaying a sixth interface; the sixth interface comprises a second image and a control for document rectification of the second image; the processing unit is further used for receiving operation of selecting a control for document rectification on the second image; the display unit is used for responding to the operation of selecting the control for carrying out document rectification on the second image and displaying a seventh interface; wherein the seventh interface comprises one or more of: a first document selection box for selecting some or all of the documents in the second image, a control for saving the documents in the first document selection box, a control for deleting the second image, or a control for restoring the second image.
In a possible implementation manner, the apparatus further includes one or more of the following: the system comprises a camera sensor, an image signal processing ISP module, an image processing module, a document identification module or an image correction module; a processing unit, specifically configured to: processing image data corresponding to document content in a first image output by a camera sensor by using an ISP module to obtain ISP-processed image data; carrying out image post-processing on the image data processed by the ISP by using an image processing module to obtain image data after image processing; carrying out document identification on the image data after the image processing by using a document identification module to obtain a target image after the document identification; and carrying out document correction on the target image after the document identification by using an image correction module to obtain a second image.
In one possible implementation, the ISP-processed image data includes photo stream data and preview stream data, and the post-image processing includes one or more of: image noise reduction processing, high dynamic scene HDR image fusion processing or character definition improving processing; a processing unit, specifically configured to: judging whether the preview stream data meets an HDR scene; when the terminal equipment determines that the preview stream data meets the HDR scene, the image processing module is utilized to perform HDR image fusion processing and single-frame character definition improving processing on the multi-frame photo-taking stream data with different exposure degrees; when the terminal equipment determines that the preview stream data does not meet the HDR scene, the image processing module is utilized to carry out multi-frame image noise reduction processing and single-frame character definition improvement processing on the shooting stream data with different multi-frame noise conditions.
In a possible implementation manner, the first interface further includes a control for automatically turning on the flash lamp, and the processing unit is further configured to receive an operation of selecting the control for automatically turning on the flash lamp; the display unit is used for responding to the operation of the control piece which is selected to automatically start the flash lamp and displaying an eighth interface; wherein, the eighth interface comprises one or more of the following: a control for automatically turning on the flash, a control for turning off the flash, a control for turning on the flash, or a control for normally on the flash, which are displayed in a highlighted form.
In one possible implementation, the processing unit is further configured to: in response to the operation of the control member for normally lighting the flash lamp, lighting the flash lamp; in response to an operation of a control for document photographing, a first image is acquired in a state where a flash is normally on when focusing is completed.
In a possible implementation manner, when the control for the normally-on flash is selected, the processing unit is specifically configured to: upon completion of focusing, a first image is acquired based on the flash in a normally on state.
In a possible implementation manner, when the terminal device does not receive an operation for a control for turning off a flash, a control for turning on the flash, or a control for constantly turning on the flash within a first time threshold, the processing unit is specifically configured to switch to the first interface.
In a possible implementation manner, the first interface further includes a second document selection box for framing the document detected by the terminal device; when the terminal device detects that the proportion of the second document selection box in the picture is smaller than a first threshold, the display unit is further configured to display a prompt message, where the prompt message instructs the user to move the terminal device closer to the document for shooting.
In a possible implementation manner, the terminal device includes an image signal processing (ISP) module, and the processing unit is specifically configured to send effect configuration parameters to the ISP module and push the lens to a position indicated by a focusing parameter by using a motor; the effect configuration parameters include the focusing parameter.
In one possible implementation, the effect configuration parameters further include one or more of the following: a camera sensor map parameter, an exposure parameter, a contrast parameter, a clarity parameter, or a sharpness parameter.
In one possible implementation manner, a plurality of sets of correspondences are stored in the terminal device, wherein one set of correspondences indicates the relation between a focusing parameter and a shooting distance; the processing unit is specifically configured to: determine the shooting distance between the terminal device and the shot document; determine, from the correspondences, the focusing parameter corresponding to the shooting distance; and push the lens to the position indicated by the focusing parameter by using a motor.
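The stored correspondence between shooting distance and focusing parameter can be illustrated as a small lookup table. The distance ranges and motor positions below are invented for demonstration only; the patent does not disclose concrete values.

```python
# Illustrative correspondence table: shooting distance range (cm) ->
# focusing parameter (lens motor position). All values are assumptions.

FOCUS_TABLE = [
    # (min_cm, max_cm, focus_motor_position)
    (0, 20, 512),
    (20, 30, 420),
    (30, 40, 360),   # typical document shooting distance per the patent
    (40, 60, 300),
]

def focus_position_for_distance(distance_cm):
    """Pick the focusing parameter whose distance range contains the
    measured shooting distance; fall back to the last entry for
    distances beyond the table."""
    for lo, hi, pos in FOCUS_TABLE:
        if lo <= distance_cm < hi:
            return pos
    return FOCUS_TABLE[-1][2]
```

With such a table, the lens can be pushed directly to a near-correct position instead of sweeping from infinity, which is the speedup the method aims at.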
In one possible implementation, the shooting distance is 30-40 centimeters.
In one possible implementation, the first application is a camera application.
In a third aspect, an embodiment of the present application provides a document shooting device, including a processor and a memory, where the memory is used to store code instructions; the processor is configured to execute the code instructions to cause the electronic device to perform a document capturing method as described in the first aspect or any implementation manner of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing instructions that, when executed, cause a computer to perform a document shooting method as described in the first aspect or any implementation manner of the first aspect.
In a fifth aspect, the present application provides a computer program product comprising a computer program that, when executed, causes a computer to perform the document shooting method described in the first aspect or any implementation manner of the first aspect.
It should be understood that the second aspect to the fifth aspect of the present application correspond to the technical solutions of the first aspect of the present application, and the beneficial effects achieved by the aspects and the corresponding possible implementations are similar and will not be described again.
Drawings
Fig. 1 is a schematic view of a scenario provided in an embodiment of the present application;
Fig. 2 is a schematic diagram of an interface for starting a document shooting function according to an embodiment of the present application;
Fig. 3 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present application;
Fig. 4 is a schematic interface diagram of a document shooting mode provided in an embodiment of the present application;
Fig. 5 is a schematic interface diagram for implementing a multi-shot mode according to an embodiment of the present application;
Fig. 6 is a schematic interface diagram for implementing a single-shot mode according to an embodiment of the present application;
Fig. 7 is a schematic diagram of an interface for prompting a user according to an embodiment of the present application;
Fig. 8 is a schematic view of an interface for document rectification provided by an embodiment of the present application;
Fig. 9 is a schematic flowchart of document shooting according to an embodiment of the present application;
Fig. 10 is a schematic flowchart of an image post-processing method according to an embodiment of the present application;
Fig. 11 is a flowchart illustrating a document shooting method according to an embodiment of the present application;
Fig. 12 is a schematic structural diagram of a document shooting device according to an embodiment of the present application;
Fig. 13 is a schematic diagram of a hardware structure of a control device according to an embodiment of the present application;
Fig. 14 is a schematic structural diagram of a chip according to an embodiment of the present application.
Detailed Description
In the embodiments of the present application, terms such as "first" and "second" are used to distinguish identical or similar items having substantially the same function and effect. For example, a first value and a second value are used only to distinguish different values; no order between them is implied. Those skilled in the art will appreciate that the terms "first," "second," and the like do not limit quantity or execution order, and do not denote any difference in importance.
It is noted that, in the present application, words such as "exemplary" or "for example" are used to indicate an example, an illustration, or a description. Any embodiment or design described herein as "exemplary" or "for example" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of such words is intended to present related concepts in a concrete fashion.
In the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, a and b, a and c, b and c, or a, b and c, wherein a, b and c can be single or multiple.
With the popularization of terminal devices, the document shooting function of a terminal device has become one of the functions commonly used in people's daily life. For example, a user may shoot a file, a slide, or the like based on the document shooting function of the terminal device. During document shooting, even if the captured picture is skewed because the shooting angle is not straight-on, the terminal device can process the skewed picture into a front-view picture based on the document shooting function, thereby improving the document shooting effect.
Exemplarily, fig. 1 is a schematic view of a scenario provided in an embodiment of the present application. As shown in fig. 1, this scenario may include: a terminal device 101 having a document shooting function, for example, the terminal device 101 may be a tablet computer (or simply a tablet), and a document screen 102 shot by the terminal device.
Further, in a possible implementation manner, fig. 2 is an interface schematic diagram for starting a document shooting function according to an embodiment of the present application. In general, the terminal device 101 may turn on a document photographing function based on the embodiment corresponding to fig. 2 and photograph the document screen 102 using the document photographing function. The document screen 102 may be a document recorded in the notepad shown in c in fig. 2.
When the terminal device 101 receives an operation of the user opening the camera application, the terminal device 101 may display an interface shown as a in fig. 2, which may include a plurality of function controls in the first-level menu 200 of the camera application, for example: a photographing control, a video recording control, a professional control, or a more control 203 for starting more functions in the camera application. The interface may also include one or more of the following, for example: a photographing control 201, a control 202 for opening a gallery, a control for opening an artificial intelligence (AI) photographing function, a flash control for setting the flash on or off, a setting control for setting the camera application, or a control for adjusting the shooting multiple. The control 202 for opening the gallery may be used to open the gallery application. The gallery application is an application for managing pictures on electronic devices such as smartphones and tablet computers, and may also be referred to as an "album"; this embodiment does not limit the name of the application. The gallery application may support various user operations, such as browsing, editing, deleting, and selecting, on pictures stored on the terminal device 101.
In the interface shown as a in fig. 2, when the terminal device 101 receives an operation of the user triggering the more control 203, the terminal device 101 may display an interface shown as b in fig. 2. The interface shown as b in fig. 2 may include one or more of the following function controls, for example: a photographing control, a video recording control, a professional control, a more control, a high-dynamic range (HDR) control, a slow motion control, a document rectification control 204, a dynamic photo control, a micro-movie control, a download control for downloading more functions, an edit control for adjusting the position of each function in the more controls, or a detail control for viewing the detail information of each function in the more controls. The document shooting function can be started by clicking the document rectification control 204; after shooting, the camera automatically identifies the text area within the viewfinder range and corrects it into a front view.
In the interface shown as b in fig. 2, when the terminal device 101 receives an operation of the user triggering the document rectification control 204, the terminal device 101 may display an interface shown as c in fig. 2. The interface shown as c in fig. 2 may include one or more of the following, for example: a document shooting control 205, a control for opening a gallery, a flash control for setting the flash on or off, a setting control for setting the camera application, a control for adjusting the shooting multiple, a control for turning off document rectification, or text information. For example, the text information may be: the document is being detected.
Further, in the interface shown as c in fig. 2, when the document rectification function is turned on, the terminal device 101 may start the focus search from infinity until a focus at which a clear document picture can be taken is found. For example, the terminal device may frame the document screen 102 using the document selection box 206. As shown in the interface c in fig. 2, the document selection box 206 may frame the document screen 102 in the notepad, and the document screen 102 may include the homework content recorded on 7/27, for example, tasks completed: 1. recite 2 lessons; 2. complete 2 sets of math test papers; and the like.
In the interface shown as c in fig. 2, when the terminal device 101 receives an operation of the user triggering the document shooting control 205 in the document rectification function, the terminal device 101 may shoot the document screen 102 based on the searched focus, and process it using an image signal processing (ISP) module, a document recognition module, a document rectification module, and the like, so as to obtain the rectified document screen 102. In a possible implementation manner, the user may further adjust the rectified document screen 102, and the terminal device 101 may perform document rectification again until a document shooting result satisfactory to the user is obtained.
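The patent does not disclose how the document rectification module works internally; a common technique for correcting a skewed document into a front view is a perspective (homography) transform that maps the four detected corners of the document to the corners of an upright rectangle. The sketch below solves for the eight transform coefficients with plain Gaussian elimination; the function names and sample points are illustrative assumptions, not the patented implementation.

```python
# Hedged sketch: four-point perspective rectification coefficients.

def _solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def perspective_coeffs(src, dst):
    """Coefficients (a..h) mapping each src corner to its dst corner:
    u = (a x + b y + c) / (g x + h y + 1), v = (d x + e y + f) / (g x + h y + 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    return _solve(A, b)

def warp_point(pt, m):
    """Apply the perspective transform to one point."""
    x, y = pt
    a, b, c, d, e, f, g, h = m
    w = g * x + h * y + 1.0
    return ((a * x + b * y + c) / w, (d * x + e * y + f) / w)
```

In practice the ISP pipeline would resample every pixel of the skewed picture through the inverse of this transform; the sketch only shows the corner-to-corner mapping that defines it.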
However, because the terminal device needs to start the focus search from infinity in the above document shooting method, the focusing speed is slow, which affects the document shooting speed. In addition, in the embodiment corresponding to fig. 2, the steps for the user to open the document shooting function (e.g., document rectification in the embodiment corresponding to fig. 2) are cumbersome, which may reduce the usage rate of the document shooting function.
In view of this, an embodiment of the present application provides a document shooting method that simplifies the steps for starting the document shooting function. When an operation of starting the document shooting function is received, the terminal device sets an appropriate focal length according to the user's habitual document shooting distance, so that fast focusing can be achieved based on that focal length and both the focusing speed and the document shooting speed are improved.
It is understood that the terminal device may also be referred to as a terminal, a user equipment (UE), a mobile station (MS), a mobile terminal (MT), and the like. The terminal device may be a mobile phone with a touch screen, a smart TV, a wearable device, a tablet computer (Pad), a computer with a wireless transceiving function, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, and the like. The embodiments of the present application do not limit the specific technology and specific device form adopted by the terminal device.
To better understand the embodiments of the present application, the structure of the terminal device according to the embodiments of the present application is described below. For example, fig. 3 is a schematic structural diagram of a terminal device provided in an embodiment of the present application.
The terminal device may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, an indicator 192, a camera 193, a display 194, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiments of the present application does not constitute a specific limitation to the terminal device. In other embodiments of the present application, a terminal device may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components may be used. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units. The different processing units may be separate devices or may be integrated into one or more processors. A memory may also be provided in processor 110 for storing instructions and data.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the terminal device, and may also be used to transmit data between the terminal device and a peripheral device. It may also be used to connect an earphone and play audio through the earphone, or to connect other electronic devices, such as AR devices.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. The power management module 141 is used for connecting the charging management module 140 and the processor 110.
The wireless communication function of the terminal device can be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Antennas in terminal devices may be used to cover single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied on the terminal device. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation.
The wireless communication module 160 may provide a solution for wireless communication applied to a terminal device, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), and the like.
The terminal device realizes the display function through the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. In some embodiments, the terminal device may include 1 or N display screens 194, with N being a positive integer greater than 1.
The terminal device can realize the shooting function through the ISP module, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and the like.
The camera 193 is used to capture still images or video. In some embodiments, the terminal device may include 1 or N cameras 193, N being a positive integer greater than 1. The camera 193 may be a front camera or a rear camera. The camera 193 may include a lens (lens) and a camera sensor (or referred to as a photosensitive element), and the photosensitive element may be any photosensitive device such as a charge-coupled device (CCD) or a Complementary Metal Oxide Semiconductor (CMOS). In the embodiment of the present application, in the document shooting function of the camera application, the terminal device may shoot a document picture based on the camera 193, and obtain a document picture based on further image processing on the document picture.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the terminal device. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area.
The terminal device can implement an audio function through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The terminal device can listen to music through the speaker 170A, or listen to a handsfree call. The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the terminal device answers a call or voice information, it is possible to answer a voice by bringing the receiver 170B close to the human ear. The headphone interface 170D is used to connect a wired headphone. The microphone 170C, also referred to as a "microphone," is used to convert sound signals into electrical signals.
The pressure sensor 180A is used for sensing a pressure signal, and converting the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The gyro sensor 180B may be used to determine the motion attitude of the terminal device. The air pressure sensor 180C is used to measure air pressure. The magnetic sensor 180D includes a hall sensor. The acceleration sensor 180E can detect the magnitude of acceleration of the terminal device in various directions (generally, three axes). A distance sensor 180F for measuring a distance. The proximity light sensor 180G may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The ambient light sensor 180L is used to sense the ambient light level. The fingerprint sensor 180H is used to collect a fingerprint. The temperature sensor 180J is used to detect temperature. The touch sensor 180K is also called a "touch device". The bone conduction sensor 180M may acquire a vibration signal. The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, or "touch screen".
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The terminal device may receive key input and generate key signal input related to user settings and function control of the terminal device. The indicator 192 may be an indicator light, which may be used to indicate a charging state or a change in charge, or to indicate a message, a missed call, a notification, and the like.
The software system of the terminal device may adopt a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture, which is not described herein again.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments. The following embodiments may be implemented independently or in combination, and details of the same or similar concepts or processes may not be repeated in some embodiments.
For example, the document shooting method may include the following steps:
First, the user may turn on the document shooting mode in the camera application through the embodiment shown in fig. 4.
In the embodiment of the present application, the camera application may be an application supported by a system of a terminal device, or the camera application may also be an application having a photographing function, and the like; the document shooting mode may be understood as a document shooting function, and the operation of starting the document shooting mode in the camera application may include: click operation, slide operation, and the like.
For example, fig. 4 is a schematic interface diagram of opening the document shooting mode according to an embodiment of the present application. In the embodiment corresponding to fig. 4, the terminal device is a tablet and shoots in portrait orientation; this example does not limit the embodiments of the present application. It is understood that the tablet may also shoot in landscape orientation; the specific shooting process and interface display are similar to those described below and are not repeated here.
When the tablet receives an operation of the user opening the camera application, the tablet may display an interface shown as a in fig. 4, which may include a plurality of function controls in a first-level menu 400 of the camera application, for example: a document shooting control 401, a photographing control, a video recording control, a professional control, and the like. The interface may further include one or more of the following, for example: a shooting control 402, a control for opening a gallery, an automatic flash control 403, a multi-shot control 404, a setting control 405, a control for adjusting the shooting multiple, a document selection box 406, or text information. The document selection box 406 may be a rectangle, and the text information may be: the document is being detected, please keep the device level with the document. The automatic flash control 403 may be understood as follows: in the document shooting mode of the terminal device, the flash may default to the automatic state, that is, when the ambient brightness is low, the flash fires automatically. The document selection box 406 may be used to frame the document screen identified by the terminal device, which is the same as the document screen shown as c in fig. 2 and is not repeated here.
It is understood that the shape, size, position, etc. of the document selection box 406 may include other contents according to actual scenarios, which are not limited in this embodiment of the present application.
In a possible implementation manner, when the tablet receives an operation of the user opening the camera application, the tablet may display the interface corresponding to the photographing control in the first-level menu 400; further, when the tablet receives the user's rightward slide on the first-level menu 400 to switch to the interface corresponding to the document shooting control 401, or receives an operation of the user clicking the document shooting control 401 in the first-level menu 400, the tablet may display the interface shown as a in fig. 4.
In a possible implementation, in the interface shown as a in fig. 4, when the tablet receives an operation of the user triggering the multi-shot control 404, the tablet may support shooting multiple documents in succession in the document shooting mode. For example, the user may sequentially shoot each page in a workbook using the document shooting mode, and the terminal device may store the document image corresponding to each shot page in the gallery of the terminal device.
In a possible implementation, in the interface shown as a in fig. 4, when the tablet receives an operation of the user triggering the automatic flash control 403, the tablet may display the interface shown as b in fig. 4. The interface shown as b in fig. 4 may include a plurality of controls for controlling the flash, for example: a flash-on control 407, the automatic flash control 403, a flash-off control 408, or a normally-on flash control 409. In the interface shown as b in fig. 4, since the automatic flash control 403 is selected by default, it may be highlighted in a different color or in bold, so that the user can distinguish the currently selected control from the unselected ones. It is understood that other contents displayed in the interface shown as b in fig. 4 are similar to those in the interface shown as a in fig. 4 and are not repeated here.
In a possible implementation, when the tablet receives an operation of the user triggering the automatic flash control 403 in the interface shown as a in fig. 4, the tablet may display the interface shown as b in fig. 4; in the interface shown as b in fig. 4, when the tablet does not receive, within a time threshold (for example, 5 seconds or 10 seconds), a further user operation on the flash-on control 407, the automatic flash control 403, the flash-off control 408, or the normally-on flash control 409, the tablet may switch back to the interface shown as a in fig. 4.
In a possible implementation manner, in the interface shown as b in fig. 4, after the tablet receives an operation of the user triggering the automatic flash control 403, when the tablet further receives an operation of the user triggering the document shooting control 402 and detects that the brightness of the current shooting picture is lower than a brightness threshold, the flash is turned on automatically; alternatively, when the tablet receives an operation of the user triggering the document shooting control 402 and detects that the shadow area in the current picture (which may be understood as the area occupied by regions whose brightness is lower than the brightness threshold) exceeds an area threshold, the flash is turned on automatically. For example, when a user shoots a document with the document shooting mode of the tablet in a scene lit by a light source, the tablet may detect a shadow (e.g., the shadow of the user or of the tablet itself) in the shooting picture, so the tablet may automatically turn on the flash based on the automatic flash control 403 during shooting to avoid the shadow degrading the document shooting effect.
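The automatic-flash decision described above (fire when the picture is too dark, or when the shadow area exceeds an area threshold) can be sketched as follows. The grayscale-frame representation, function name, and threshold values are assumptions for illustration, not values from the patent.

```python
# Hedged sketch of the automatic-flash decision in document shooting mode.
# Thresholds (80 out of 255, 30% shadow area) are invented defaults.

def auto_flash_needed(frame, brightness_threshold=80, area_threshold=0.3):
    """frame: 2D list of grayscale pixel values (0-255).
    Return True when the mean brightness of the shooting picture is
    below the brightness threshold, or when the fraction of 'shadow'
    pixels (darker than the brightness threshold) exceeds the area
    threshold."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    if mean < brightness_threshold:
        return True
    shadow = sum(1 for p in pixels if p < brightness_threshold)
    return shadow / len(pixels) > area_threshold
```

The second branch covers the case described in the example: a picture that is bright on average but contains a large shadow cast by the user or the tablet.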
In a possible implementation manner, in the interface shown as b in fig. 4, after the tablet receives the operation of triggering the flash on control 407 by the user, further, when the tablet receives the operation of triggering the document shooting control 402 by the user, the tablet may turn on the flash for document shooting.
In a possible implementation manner, in the interface shown as b in fig. 4, after the tablet receives the operation of triggering the turn-off flash control 408 by the user, further, when the tablet receives the operation of triggering the document shooting control 402 by the user, the tablet may turn off the flash for document shooting.
In a possible implementation manner, in the interface shown as b in fig. 4, after the tablet receives an operation of the user triggering the normally-on flash control 409, the tablet turns on the flash and may display prompt information, such as "flash normally on", in the document shooting preview interface. Further, when the tablet receives an operation of the user triggering the document shooting control 402, the tablet may shoot the document with the flash already on. After the tablet finishes one document shot, the flash remains on; it does not exit the normally-on state until the user clicks any one of the flash-on control 407, the automatic flash control 403, or the flash-off control 408 to switch the flash to another working mode.
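The flash behavior across the four controls, including the persistence of the normally-on state after a shot, can be modeled as a small state machine. The mode names below ("auto", "on", "off", "torch") are assumptions for this sketch, not terms from the patent; "torch" stands for the normally-on flash control 409.

```python
# Hedged sketch of the flash working modes described above.

class FlashController:
    MODES = ("auto", "on", "off", "torch")  # "torch" = normally on

    def __init__(self):
        # In the document shooting mode the flash defaults to automatic.
        self.mode = "auto"
        self.torch_lit = False

    def select(self, mode):
        """User taps one of the four flash controls."""
        if mode not in self.MODES:
            raise ValueError(mode)
        self.mode = mode
        # The normally-on mode lights the flash immediately, before any shot.
        self.torch_lit = (mode == "torch")

    def flash_lit_after_shot(self):
        """After one document shot, only the normally-on mode keeps the
        flash lit; it stays lit until another mode is selected."""
        return self.torch_lit
```

Selecting any other mode clears the normally-on state, matching the behavior where the flash exits the continuously lit state only when the user switches working modes.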
Based on this, the user can conveniently and quickly open the document shooting mode through the embodiment corresponding to fig. 4, which can further improve the usage rate of the document shooting mode.
Further, on the basis that the user starts a document shooting mode in the camera application in the embodiment corresponding to fig. 4, the user can shoot a document through the embodiment corresponding to fig. 5 (or fig. 6).
In the embodiment of the application, under the condition that the user triggers the multi-shot control, the tablet can realize a multi-shot mode of the document (as in the embodiment corresponding to fig. 5); alternatively, in the case where the user does not trigger the multi-shot control, the tablet may implement a single document shot and display the document correction page (as in the embodiment corresponding to fig. 6).
Fig. 5 is a schematic interface diagram for implementing a multi-shot mode according to an embodiment of the present application. In the embodiment corresponding to fig. 5, the terminal device is taken as a tablet, and the tablet takes vertical screen shooting as an example for illustration, and the example does not constitute a limitation on the embodiment of the present application.
In the interface shown in a in fig. 5, on the basis that the user has triggered the multi-shot control 500, when the tablet receives an operation that the user triggers the document shooting control 501, the tablet may acquire image data obtained based on camera shooting, briefly display the interface shown in b in fig. 5, and then switch to the interface shown in c in fig. 5. In the interface shown in a of fig. 5, the multi-shot control 500 may be highlighted in a different color, bolded, or the like.
For the interface shown in a in fig. 5, a thumbnail of a portrait that has been obtained by shooting may be displayed in a control 502 for opening a gallery in the interface, and other contents displayed in the interface are similar to those in the interface shown in a in fig. 4, and are not described herein again.
As shown in b of fig. 5, the interface may display: the document frame 504, the floating window 505 overlaid on the document frame 504, the control 502 for opening the gallery, the document shooting control 501, and the like; other contents displayed in the interface are similar to those in the interface shown in a in fig. 4 and are not described again here. The floating window 505 is used to indicate that the tablet has finished shooting the document, performing the auto-correction processing, and saving the document picture to the gallery. As shown in b in fig. 5, a thumbnail of the processed document picture may be displayed in the control 502 for opening the gallery, and the content displayed in this control in the interface shown in b in fig. 5 is different from that displayed in the interface shown in a in fig. 5. Before the floating window 505 is displayed, the tablet performs image acquisition and automatic document correction processing in the background; at this time, the document shooting control 501 may be grayed out and in an inoperable state. Optionally, while the tablet performs the automatic document correction processing, a rotating ring may be displayed near the document shooting control 501 to prompt the user that the correction processing is in progress. The content displayed in the floating window 505 may be similar to the content of the document in the document frame shown in c in fig. 2 and is not described herein again.
As for the interface shown in c in fig. 5, a thumbnail of a screen after document processing may be displayed in the control 502 for opening a gallery in the interface, and the content displayed in the control 502 for opening a gallery in the interface shown in c in fig. 5 is different from the content displayed in the control 502 for opening a gallery in the interface shown in a in fig. 5. Other contents displayed in the interface shown in c in fig. 5 may be similar to those displayed in the interface shown in a in fig. 5, and are not described in detail herein. Further, the user can perform the next document photographing using the interface shown as c in fig. 5, thereby realizing multi-photographing of the document. It will be appreciated that if the camera viewing range of the tablet is not changed, a preview image of the document may still be displayed in the interface shown in c of fig. 5, which is the same as that shown in a of fig. 5.
It will be appreciated that in the event that the multi-shot control is triggered, the tablet may automatically save the captured document picture to the gallery.
Based on the above, the user can select a suitable document shooting mode according to his or her own needs; for example, when the user shoots a plurality of documents, rapid shooting of the plurality of document pictures can be realized based on the embodiment shown in fig. 5.
For example, fig. 6 is a schematic interface diagram for implementing a single-shot mode according to an embodiment of the present application. In the embodiment corresponding to fig. 6, the terminal device is taken as a tablet, and the tablet takes vertical screen shooting as an example for illustration, and the example does not constitute a limitation on the embodiment of the present application.
In the interface shown in a in fig. 6, on the basis that the user has not triggered the multi-shot control 600, when the tablet receives an operation that the user triggers the document shooting control 601, the tablet may perform automatic correction processing on image data acquired based on camera shooting, briefly display the interface shown in b in fig. 6, and then switch to the interface shown in c in fig. 6.
For example, as shown in an interface a in fig. 6, a thumbnail of a portrait that has been obtained by shooting may be displayed in a control 602 for opening a gallery in the interface, and other contents displayed in the interface are similar to those in the interface a in fig. 5 and are not described herein again.
The interface shown as b in fig. 6 may be used to indicate that the photographed document is undergoing correction processing, and may display: the preview 603, the control 602 for opening the gallery, the document shooting control 601, and the like; other contents displayed in the interface are similar to those in the interface shown in a in fig. 6 and are not described again here. The control 602 for opening the gallery in the interface may display a thumbnail of the portrait that has been shot, and the document shooting control 601 in the interface may be grayed out and in an inoperable state. Optionally, as shown in the interface b in fig. 6, while the tablet performs the automatic document correction processing, a rotating ring may be displayed near the document capturing control 601 to prompt the user that the correction processing is in progress.
As shown in c in fig. 6, the interface may be a preview interface displayed after the document correction processing is completed, and may be used to support the user in performing manual document rectification. The interface may display: a preview 603, an image 604 overlaid on the preview 603, a document rectification control 605, a delete control 606, a save control 607, a control 602 for opening the gallery, a document capture control 601, and the like. The image 604 is used for previewing the acquired image after the document correction processing. The document shooting control 601, the control 602 for opening the gallery, and other such controls may be grayed out and in an inoperable state. The image 604 may be highlighted, and the content displayed by the image 604 may be similar to the content of the document in the document frame shown in c in fig. 2, which is not described herein again. The document rectification control 605, the delete control 606, and the save control 607 may be highlighted and in an operable state.
In a possible implementation manner, in the interface shown as c in fig. 6, after the tablet receives an operation that the user triggers the document rectification control 605, the user may adjust the frame of the image 604 to select suitable document content in the image 604, and the tablet may then perform further document rectification on the adjusted document content in the image 604. For example, as shown in the interface of c in fig. 6, the user may select a suitable document region by dragging the four corner boxes of the image 604.
In a possible implementation, in the interface shown in c in fig. 6, after the tablet receives an operation that the user triggers the document rectification control 605, the tablet may display a document selection box (not shown in the interface shown in c in fig. 6), which may temporarily frame the edge of the image 604, and then the user may frame a suitable document area in the image 604 by adjusting the document selection box, for example, dragging four corners of the document selection box.
In a possible implementation, in the interface shown as c in fig. 6, when the tablet receives an operation that the user triggers the delete control 606, the tablet may delete the image 604 and display the interface shown as d in fig. 6.
In a possible implementation manner, in the interface shown as c in fig. 6, when the tablet receives an operation that the user triggers the save control 607, the tablet may perform document rectification again on the image 604 (or the document content after the screen adjustment in the image 604) and save the document into the gallery, and display the interface shown as d in fig. 6. The interface shown as d in fig. 6 may be used for performing next document shooting and document rectification, a thumbnail corresponding to the image 604 may be displayed in the control 602 for opening the gallery in the interface, and other contents displayed in the interface are similar to those in the interface shown as a in fig. 6, and are not described herein again. It will be appreciated that if the camera viewing range of the tablet is not changed, a preview image of the document may still be displayed in the interface shown as d in fig. 6, which is the same as that shown as a in fig. 6.
In a possible implementation manner, as the interface shown in c in fig. 6, the interface may further include a control for sharing (not shown in the interface shown in c in fig. 6), and further, when the tablet receives an operation that the user triggers the control for sharing, the tablet may implement sharing the image 604 to other applications or other devices, and the like.
It is to be understood that, as shown in c in fig. 6, the interface may further include a control for renaming, a control for collecting, a control for rotating, a control for adding a remark, a control for printing, or a control for identifying text in a document, and the control that may be displayed in the interface may include other contents according to an actual scene, which is not limited in this embodiment of the application.
Based on this, the user can select a suitable document shooting mode according to his or her own needs; for example, when shooting a single document, the user can capture a single document picture based on the embodiment corresponding to fig. 6 and adjust the document in time, so as to obtain a suitable document picture.
On the basis of the embodiment corresponding to fig. 4, in a possible implementation manner, when the proportion of the document selection box detected by the terminal device in the current picture is lower than a preset threshold, the terminal device may prompt the user to approach the document for shooting.
Fig. 7 is a schematic interface diagram of prompting a user according to an embodiment of the present application. In the embodiment corresponding to fig. 7, the terminal device is taken as a tablet, and the tablet takes vertical screen shooting as an example for illustration, and the example does not constitute a limitation on the embodiment of the present application.
As shown in fig. 7, when the tablet receives an operation of the user opening the document shooting mode, and the tablet detects that the proportion of the document selection box 701 in the current picture is lower than a preset threshold, or detects that the current distance to the document exceeds a distance threshold, the tablet may display a prompt message 702, where the prompt message 702 may be: "Detecting document; please move the device closer to the document." Other display contents in the interface shown in fig. 7 are similar to those in the interface shown in a in fig. 5 and are not described herein again.
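The "move closer" check described above can be sketched as a simple area-ratio test. This is an illustrative sketch only: the function name, arguments, and the 20% threshold are assumptions, since the patent does not specify concrete values.

```python
# Hypothetical sketch of the "document too far away" check: compare the area
# of the detected document selection box against the preview frame area.
RATIO_THRESHOLD = 0.2  # assumed preset threshold: box must cover >= 20% of frame

def should_prompt_user(box_w, box_h, frame_w, frame_h,
                       ratio_threshold=RATIO_THRESHOLD):
    """Return True when the document selection box occupies too small a
    share of the preview frame, i.e. the user should move closer."""
    box_area = box_w * box_h
    frame_area = frame_w * frame_h
    return box_area / frame_area < ratio_threshold
```

For example, a 200x150 box in a 1080x1920 preview covers roughly 1.4% of the frame, so the prompt would be shown; a box covering about a third of the frame would not trigger it.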
Based on the above, during document shooting, even if the distance between the user and the document is large, the user can move closer to the document based on the prompt information displayed by the terminal device, so that the terminal device can identify and obtain a clearer document picture.
On the basis of the embodiment corresponding to fig. 5, in a possible implementation manner, in the interface shown as a in fig. 5, after the user triggers the multi-shot control 500 and takes a document photo, the user may perform document rectification on a document picture obtained through a document-taking mode in the gallery.
For example, fig. 8 is a schematic interface diagram of document rectification provided in an embodiment of the present application. In the embodiment corresponding to fig. 8, the terminal device is taken as a tablet, and the tablet takes vertical screen shooting as an example for illustration, and the example does not constitute a limitation on the embodiment of the present application.
When the tablet receives an operation of the user opening the gallery application, the tablet may display the interface shown as a in fig. 8, which may include one or more of the following, for example: a text box for searching for a photo, a thumbnail corresponding to picture 801, a thumbnail corresponding to picture 802, a thumbnail corresponding to picture 803, a thumbnail corresponding to picture 804, and the like. The picture 801 may have been taken today, and the pictures 802, 803, and 804 may have been taken yesterday.
In the interface shown as a in fig. 8, when the tablet receives an operation that the user triggers the thumbnail corresponding to the picture 801, the tablet may display the interface shown as b in fig. 8. An interface, shown as b in fig. 8, which may include one or more of the following: a picture 801, a document rectification control 805, a control for viewing more information from the picture 801, a sharing control, a collection control, an editing control, a deletion control or more controls, and the like. The content displayed in the picture 801 may be the same as the document content in the document screen in the interface shown in c in fig. 2, and is not described herein again.
Further, in the interface shown as b in fig. 8, when the tablet receives an operation that the user triggers the document rectification control 805, the tablet may display the interface shown as c in fig. 8. An interface, as shown at c in fig. 8, which may include one or more of the following, for example: picture 801, document selection box 806, delete control 807, restore control 808, and save control 809. Further, the user can select an appropriate document content through the document selection box 806, for example, the box selects a content corresponding to "completed task" in the picture 801.
In a possible implementation, in the interface shown as c in fig. 8, when the tablet receives an operation that the user triggers the delete control 807, then the tablet may delete the picture 801.
In a possible implementation manner, in the interface shown as c in fig. 8, when the tablet receives an operation that the user triggers the restore control 808, the tablet may cancel the document selection box 806 selected by the user and restore the content corresponding to the picture 801.
In a possible implementation manner, in the interface shown as c in fig. 8, when the tablet receives an operation that the user triggers the save control 809, the tablet may save the document picture corresponding to the document selection box 806.
It can be understood that, even if the terminal device does not receive the operation of the user triggering the multi-shot control 600, and the user has triggered the save control 607 in the interface shown as c in fig. 6, the user can still perform document rectification again in the gallery on the document content that has already been rectified (the image 604 shown as c in fig. 6), based on the document rectification process in the embodiment corresponding to fig. 8.
Based on the above, even when the document identification in the document picture obtained by the terminal device in the document shooting mode is inaccurate, the user can select the document content to be identified through manual adjustment.
For example, fig. 9 is a schematic flowchart of document shooting according to an embodiment of the present application. As shown in fig. 9, the document shooting process may include the following steps:
S901, the terminal device receives the operation of the user opening the document shooting mode.
In the embodiment of the present application, a user may open a document shooting mode with reference to the embodiment corresponding to fig. 4.
In a possible implementation manner, when the terminal device detects that the proportion of the frame occupied by the document selection box is lower than the preset threshold, the terminal device may display the prompt information based on the embodiment as shown in fig. 7, so as to prompt the user to shoot the terminal device close to the document.
S902, the terminal device sends the effect configuration parameters to the ISP module, uses a motor to push the lens to the position indicated by the focusing parameters based on the quick focusing module, and searches for an in-focus point near the position indicated by the focusing parameters.
The in-focus point in the embodiment of the present application may be understood as the focus position used for photographing after the search finally stops.
In the embodiment of the present application, the effect configuration parameter is a parameter for supporting document shooting in a document shooting mode, for example, the effect configuration parameter may include one or more of the following: camera sensor map parameters, exposure parameters, focus parameters, contrast parameters, sharpness parameters, or the like. The preset position may be understood as a position indicated by a focusing parameter in the effect configuration parameter. The focusing parameters may be obtained by learning historical focusing data obtained when one or more users utilize the terminal device to shoot documents, and the historical focusing data may include: historical photographing distance data, historical focus distance data, and the like.
For example, learning from the historical focusing data may show that most users shoot at a distance of 30 centimeters (cm) to 40 cm when shooting a document of A4 paper size, and the focal distance data corresponding to the shooting distance of 30 cm to 40 cm may be position (pos) 0; the terminal device may set pos0 as the focusing parameter. For example, when the terminal device shoots a document, the terminal device may instruct the motor to push the lens to the position corresponding to pos0, the lens may focus at the position corresponding to pos0, and then the lens starts auto-focus again, searching for an in-focus point near the position of pos0.
In a possible implementation manner, the terminal device (or a server communicating with the terminal device) may also obtain multiple sets of corresponding relationships based on learning of historical shooting distance data, historical focal length data, and the like in the historical focusing data. Any one of the sets of corresponding relations is used for indicating the corresponding relation between the shooting distance and the focusing parameter.
Taking the case where the multiple sets of corresponding relationships are stored in the terminal device as an example: when the terminal device performs document shooting, the terminal device may measure the shooting distance between the terminal device and the shot object by using an infrared transmitter (for transmitting infrared rays) or an ultrasonic transmitter (for generating ultrasonic waves). When the terminal device can find the focusing parameter corresponding to the shooting distance from the multiple sets of corresponding relationships, the terminal device may instruct the motor to push the lens to the position corresponding to the focusing parameter; the lens may focus at that position, start auto-focusing, and search for an in-focus point near the position indicated by the focusing parameter.
Taking the case where the server stores the multiple sets of corresponding relationships as an example: when the terminal device shoots a document, the terminal device may measure the shooting distance between the terminal device and the shot object by using an infrared transmitter, an ultrasonic transmitter, or the like. Further, the terminal device may send the server a request to query the focusing parameter corresponding to the shooting distance; when the server can find the focusing parameter corresponding to the shooting distance, the server may send the focusing parameter to the terminal device, and the terminal device may then instruct the motor to push the lens to the position corresponding to the focusing parameter; the lens may focus at that position, start auto-focusing, and search for an in-focus point near the position indicated by the focusing parameter.
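The shooting-distance-to-focusing-parameter correspondence described above can be sketched as a simple lookup table. All values below (distance ranges, motor position codes, the fallback position) are invented for illustration; the patent only states that such correspondences are learned from historical focusing data.

```python
# Hypothetical distance -> motor-position table learned from historical
# focusing data. Entries: (min_distance_cm, max_distance_cm, motor_position).
FOCUS_TABLE = [
    (10, 20, 120),
    (20, 30, 90),
    (30, 40, 60),   # e.g. the A4-document case ("pos0" in the text)
    (40, 60, 40),
]

DEFAULT_POSITION = 0  # fall back to ordinary auto-focus from a default position

def focus_position_for_distance(distance_cm):
    """Return the motor position to push the lens to; fine auto-focus then
    searches for the in-focus point near that position."""
    for lo, hi, pos in FOCUS_TABLE:
        if lo <= distance_cm < hi:
            return pos
    return DEFAULT_POSITION
```

With this table, a measured shooting distance of 35 cm maps to position 60; a distance outside every range falls back to the default, from which ordinary auto-focus proceeds.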
It can be understood that the terminal device can improve the focusing speed based on the method of focusing at the preset position.
S903, the terminal device receives the operation that the user triggers the document shooting control, and the camera sensor generates image data.
In the embodiment of the present application, a user can implement multi-shot of a document with reference to the embodiment corresponding to fig. 5 to obtain image data; or the user can realize the single shooting of the document by referring to the embodiment corresponding to fig. 6, and the image data is obtained.
In a possible implementation manner, in the case that the multi-shooting mode is turned on, the user may perform further document rectification on the document picture obtained in the multi-shooting mode in the gallery, with reference to the embodiment corresponding to fig. 8.
S904, the ISP module in the terminal device processes the image data to obtain the ISP-processed image data.
The obtained ISP processed image data may include photo stream data and preview stream data. The format of the photo stream data can be converted from RAW (or referred to as RAW image) format to YUV (or referred to as luminance and chrominance) format.
S905, the image processing module in the terminal device performs image post-processing on the ISP-processed image data to obtain the image data after image processing.
Wherein the image post-processing may include one or more of the following: YUV domain image noise reduction processing, HDR processing, or text sharpness enhancement processing, etc. The YUV domain image noise reduction processing may include: YUV domain single-frame image noise reduction processing, YUV domain multi-frame image noise reduction processing, and the like. The text sharpness enhancement processing may include super resolution (SR) processing, which may include: single-frame SR processing, multi-frame SR processing, and the like.

S906, the document identification module in the terminal device performs document identification on the image data after image processing to obtain the image data after document identification.
S907, the image correction module in the terminal device performs document correction processing on the image data after document identification to obtain the document-corrected image data. Further, the document-corrected image data may be displayed on the terminal device, or the document-corrected image may be saved to the gallery.
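The S901–S907 flow above can be sketched as a chain of module stages. The function bodies below are toy stand-ins (the real ISP, image processing, document identification, and image correction modules are hardware and algorithm components not detailed here); only the ordering of stages follows the text.

```python
# Toy sketch of the S904-S907 pipeline stages; all bodies are placeholders.

def isp_process(raw):
    # S904: ISP processing, e.g. RAW -> YUV format conversion.
    return {"format": "YUV", "data": raw["data"]}

def post_process(img):
    # S905: image post-processing (noise reduction / HDR / sharpness).
    return {**img, "post_processed": True}

def identify_document(img):
    # S906: document identification (locating the document region).
    return {**img, "document_box": (0, 0, 100, 140)}  # hypothetical box

def rectify_document(img):
    # S907: document correction (e.g. perspective rectification).
    return {**img, "rectified": True}

def capture_document(raw_frame):
    """Run one frame through the four stages in order."""
    img = isp_process(raw_frame)
    img = post_process(img)
    img = identify_document(img)
    return rectify_document(img)

result = capture_document({"format": "RAW", "data": [0, 1, 2]})
```

The design point is simply that each module consumes the previous module's output, matching the step order in fig. 9.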
Based on this, when receiving the operation of the user opening the document shooting function, the terminal device can set a suitable focus position according to users' habits when shooting documents, so that the terminal device can focus quickly based on this suitable focus position, improving both the focusing speed and the document shooting speed.
On the basis of the embodiment corresponding to fig. 9, in a possible implementation manner, S905 may include: the terminal device may determine whether the current scene is an HDR scene based on the preview stream data in the step shown in S904. When the terminal device determines that the current scene is not an HDR scene, the terminal device may process the photo stream data by using a YUV domain image noise reduction processing (hereinafter referred to as image noise reduction processing) method and a text sharpness enhancement processing (for example, SR processing) method to obtain target data; or, when the terminal device determines that the current scene is an HDR scene, the terminal device may process the photo stream data by using an HDR algorithm and a text sharpness enhancement processing method (e.g., SR processing) to obtain target data.
For example, fig. 10 is a schematic flowchart of an image post-processing method according to an embodiment of the present application. As shown in fig. 10, the image post-processing method may include the following steps:
S1001, the terminal device determines whether the current scene is an HDR scene.
In this embodiment of the application, when the terminal device determines that the current scene is the HDR scene, the terminal device may execute the steps shown in S1002-S1003; alternatively, when the terminal device determines that it is not an HDR scene, the terminal device may perform the steps shown in S1004-S1008.
Illustratively, the terminal device may obtain multiple frames of preview stream data, and determine whether the current scene is an HDR scene based on a ratio of HDR images in the multiple frames of preview stream data; alternatively, the terminal device may acquire single-frame preview stream data and determine whether the current scene is an HDR scene based on the proportion of the highlight pixels in the single-frame preview stream data. The HDR image may be an image in which a proportion of the highlight pixels exceeds a proportion threshold, and the highlight pixels may be pixel points in which a gray value of the pixel is greater than a gray threshold.
In a possible implementation manner, the terminal device may also down-sample the preview stream data multiple times (e.g., by a factor of 4) to obtain a preview thumbnail. The terminal device may obtain multiple frames of preview data from the preview thumbnail and then determine whether the current scene is an HDR scene based on the proportion of HDR images in the multiple frames of preview data; or the terminal device may obtain a single frame of preview data from the preview thumbnail and then determine whether the current scene is an HDR scene based on the proportion of highlight pixels in the single frame of preview data. It can be appreciated that performing the HDR scene determination based on the preview thumbnail can save memory usage.
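The HDR scene check described above can be sketched as two threshold tests: a frame is an "HDR image" when its share of highlight pixels exceeds a ratio threshold, and the scene is HDR when the share of such frames exceeds another threshold. The concrete threshold values below are assumptions; the patent does not specify them.

```python
# Illustrative sketch of the HDR-scene determination; thresholds are assumed.
GRAY_THRESHOLD = 230    # a pixel counts as "highlight" above this gray value
RATIO_THRESHOLD = 0.10  # a frame counts as HDR if >10% of pixels are highlights

def is_hdr_frame(gray_pixels, gray_threshold=GRAY_THRESHOLD,
                 ratio_threshold=RATIO_THRESHOLD):
    """gray_pixels: flat list of 0-255 gray values for one preview frame."""
    highlights = sum(1 for p in gray_pixels if p > gray_threshold)
    return highlights / len(gray_pixels) > ratio_threshold

def is_hdr_scene(frames, frame_ratio_threshold=0.5):
    """Multi-frame variant: the scene is HDR when the proportion of HDR
    frames among the preview frames exceeds a threshold."""
    hdr = sum(1 for f in frames if is_hdr_frame(f))
    return hdr / len(frames) > frame_ratio_threshold

# Tiny example frames (flat gray-value lists):
bright_frame = [250] * 50 + [100] * 50   # 50% highlight pixels
dark_frame = [100] * 100                 # no highlight pixels
```

The single-frame check corresponds to the single-frame-preview branch in the text; `is_hdr_scene` corresponds to the multi-frame branch.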
It is understood that the method for determining whether the current scene is the HDR scene may include other contents according to the actual scene, and this is not limited in the embodiment of the present application.
S1002, the terminal device collects multi-frame photo stream data with different exposure degrees and performs multi-frame image fusion processing to obtain single-frame image data after fusion processing.
In the embodiment of the present application, the multi-frame photo stream data with different exposure degrees may include one or more of the following: normal frame data, long frame data, or short frame data, etc. The long frame data may be used to promote an over-dark area in the normal frame data, and the short frame data may be used to suppress an over-exposed area in the normal frame data.
For example, the terminal device may perform multi-frame image fusion processing on the multi-frame photo stream data with different exposure degrees by using an algorithm such as a brightness gradient method, a bilateral filtering method, or a Laplacian pyramid to obtain single-frame image data after fusion processing; or the terminal device may perform multi-frame image fusion processing on the multi-frame photo stream data with different exposure degrees based on a first machine learning model to obtain single-frame image data after fusion processing. The first machine learning model may be obtained by training based on multi-frame photo stream sample data.
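A minimal sketch of the exposure-fusion idea: weight each frame's pixel by how well exposed it is (close to mid-gray), then take the weighted average. This is a toy stand-in under assumed weights, not the brightness gradient, bilateral filtering, or Laplacian pyramid methods named above.

```python
import math

def well_exposedness(p, mid=128.0, sigma=64.0):
    """Weight in (0, 1]: highest for pixels near mid-gray (assumed Gaussian)."""
    return math.exp(-((p - mid) ** 2) / (2 * sigma ** 2))

def fuse_exposures(frames):
    """frames: equal-length flat gray-value lists (e.g. short/normal/long
    exposures of the same scene). Returns one fused frame: over-dark pixels
    are lifted by the long frame, over-exposed ones suppressed by the short."""
    fused = []
    for pixels in zip(*frames):
        weights = [well_exposedness(p) for p in pixels]
        total = sum(weights)
        fused.append(sum(w * p for w, p in zip(weights, pixels)) / total)
    return fused

# Two 2-pixel frames: one badly exposed (10 and 250), one well exposed (128).
fused = fuse_exposures([[10, 250], [128, 128]])
```

In the fused output, each pixel is pulled toward the better-exposed frame's value, which is the behavior the long/short frame description above aims at.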
It can be understood that the number of the multi-frame photo stream data and the processing method of the multi-frame image fusion may include other contents according to an actual scene, which is not limited in this embodiment of the application.
S1003, the terminal device performs single-frame SR processing on the single-frame image data after fusion processing to obtain target data.
In the embodiment of the present application, single-frame SR processing can be understood as adding pixel points in the horizontal and vertical directions of the image, so as to improve the definition of the single-frame image. The value of an added pixel point may be related to the surrounding pixel points at its location.
For example, the terminal device may perform single-frame SR processing on the single-frame image data after fusion processing by using a second machine learning model to obtain target data. The second machine learning model may be obtained by training based on single-frame image sample data.
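The "add pixels whose values come from surrounding pixels" idea can be sketched with simple linear interpolation: insert one interpolated pixel between each pair of neighbours, horizontally then vertically (roughly doubling resolution). Real single-frame SR would use the learned model mentioned above; this toy version only illustrates the geometry.

```python
# Toy near-2x upscaling by linear interpolation; a stand-in for learned SR.

def upsample_row_2x(row):
    """Insert one interpolated pixel between each pair of horizontal neighbours."""
    out = []
    for a, b in zip(row, row[1:]):
        out.append(a)
        out.append((a + b) / 2)  # new pixel depends on its surroundings
    out.append(row[-1])
    return out

def upsample_2x(image):
    """image: list of rows of gray values. Upscale horizontally, then
    vertically by interpolating a new row between each pair of rows."""
    wide = [upsample_row_2x(r) for r in image]
    tall = []
    for r1, r2 in zip(wide, wide[1:]):
        tall.append(r1)
        tall.append([(a + b) / 2 for a, b in zip(r1, r2)])
    tall.append(wide[-1])
    return tall

up = upsample_2x([[0, 2], [4, 6]])  # 2x2 image becomes 3x3
```

Each inserted pixel is the average of its two neighbours, matching the statement that an added pixel is related to the surrounding pixels at its location.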
S1004, the terminal device may determine whether to adopt a multi-frame image noise reduction processing method.
In this embodiment of the application, when the terminal device determines to adopt the multi-frame image denoising processing method, the terminal device may execute the steps shown in S1005-S1006; alternatively, when the terminal device determines that the multi-frame image noise reduction processing method is not employed, the terminal device may execute the steps shown in S1007-S1008. The multi-frame image denoising processing can have a better denoising effect.
S1005, the terminal device collects multi-frame photo stream data under different noise conditions and performs multi-frame image noise reduction processing to obtain single-frame image data after the multi-frame image noise reduction processing.
In the embodiment of the present application, the multi-frame image noise reduction processing may be understood as a processing method that performs weighted fusion on the multi-frame photo stream data by using algorithms such as multi-scale fusion or block fusion based on the noise characteristics of each frame of photo stream data; the specific multi-frame image noise reduction processing method is not limited in the embodiment of the present application. The multi-scale fusion algorithm can be understood as a fusion method at different resolutions, and the block fusion algorithm can be understood as a fusion method over different brightness regions.
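The weighted-fusion core of multi-frame noise reduction can be sketched as a per-pixel weighted average over several captures of the same scene, with weights reflecting each frame's (assumed known) noise level. This is a minimal stand-in for the multi-scale and block fusion algorithms named above, not an implementation of them.

```python
# Minimal sketch of multi-frame noise reduction by weighted per-pixel fusion.

def fuse_noisy_frames(frames, weights):
    """frames: equal-length flat pixel lists of the same scene;
    weights: one weight per frame, e.g. higher for frames estimated to be
    less noisy. Returns a single fused (denoised) frame."""
    total = sum(weights)
    return [sum(w * px[i] for w, px in zip(weights, frames)) / total
            for i in range(len(frames[0]))]

# Equal weights reduce to a plain average, which cancels zero-mean noise.
denoised = fuse_noisy_frames([[100, 0], [110, 0], [90, 0]], [1, 1, 1])
```

With equal weights this is plain frame averaging; unequal weights let cleaner frames dominate, which is the role the per-frame noise characteristics play in the text.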
S1006, the terminal device performs single-frame SR processing on the single-frame image data after the multi-frame image noise reduction processing to obtain target data.
It is to be understood that the method of the single frame SR processing in the step shown in S1006 may be similar to the method of the single frame SR processing in the step shown in S1003, and is not described herein again.
S1007, the terminal device collects multi-frame photographing stream data under different noise conditions, and performs single-frame image denoising processing respectively to obtain multi-frame image data after the single-frame image denoising processing.
For example, methods such as wavelet transform or bilateral filtering may be used to perform single-frame image denoising on each of the multiple frames of photographing stream data, so as to obtain multiple frames of image data after the single-frame image denoising.
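The wavelet-transform option mentioned above can be illustrated with a one-level Haar transform applied to each frame independently; the threshold value and signal are invented for illustration and are not values from the patent:

```python
import numpy as np

def haar_denoise(x, thresh):
    # One-level 1-D Haar wavelet denoising of a single frame (per row):
    # split into approximation and detail coefficients, soft-threshold
    # the detail band (which carries most of the noise), then invert.
    a = (x[0::2] + x[1::2]) / 2.0                        # approximation
    d = (x[0::2] - x[1::2]) / 2.0                        # detail
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0) # soft threshold
    out = np.empty_like(x)
    out[0::2] = a + d
    out[1::2] = a - d
    return out

noisy = np.array([1.0, 1.2, 3.0, 3.1])
clean = haar_denoise(noisy, thresh=0.2)  # small oscillations removed
```

Each of the multiple frames would be passed through such a filter separately, yielding the multiple frames of single-frame-denoised image data used in S1008.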
S1008, the terminal device performs multi-frame SR processing on the multi-frame image data after the single-frame image denoising to obtain the target data.
In the embodiment of the application, the multi-frame SR processing may be understood as performing weighted fusion on the multi-frame image data after the single-frame image denoising, so that the terminal device can obtain target data with better definition and richer details.
For example, the terminal device may perform multi-frame SR processing on the multi-frame image data after the single-frame image denoising by using a third machine learning model to obtain the target data. The third machine learning model may be obtained by training on multi-frame image sample data.
In a possible implementation manner, the image post-processing method in the embodiment corresponding to fig. 10 may be implemented in a terminal device, or may also be implemented in a server connected to the terminal device, which is not limited in this embodiment of the application.
Based on this, the terminal device may obtain a document picture with a better picture effect according to different scenes based on the image post-processing method in the embodiment corresponding to fig. 10.
It is understood that the interface described in the embodiments of the present application is only an example, and is not to be construed as further limiting the embodiments of the present application.
Based on the content described in the foregoing embodiments, in order to better understand the embodiments of the present application, fig. 11 is a schematic flowchart of a document shooting method provided in the embodiments of the present application.
As shown in fig. 11, the document photographing method may include the steps of:
S1101, the terminal device displays a first interface.
In the embodiment of the application, the first interface may include a control for document shooting and a first preview image. The first interface may be the interface shown in a in fig. 4, the control for document shooting may be the document shooting control 402 in the interface shown in a in fig. 4, and the first preview image may be the document picture displayed in the interface shown in a in fig. 4.
S1102, the terminal device pushes the lens to the position indicated by the focusing parameter by using a motor.
In the embodiment of the present application, the focusing parameter is a preset constant, and the process of obtaining the focusing parameter may refer to the description of the focusing parameter in S902, which is not described herein again.
S1103, the terminal device starts focusing from the position indicated by the focusing parameter.
Illustratively, the terminal device starts a focus search from the position indicated by the focusing parameter and searches for a quasi-focus point near that position. The quasi-focus point can be understood as the focus point used for photographing after the search finally stops.
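The local search described above can be sketched as a contrast hill-climb that starts at the preset motor position. Everything here is an assumption for illustration: `sharpness_at(pos)` stands for whatever contrast/sharpness metric the autofocus pipeline evaluates at lens position `pos`, and the step size and step budget are invented:

```python
def focus_search(sharpness_at, start, max_steps=10):
    # Hill-climb from the preset lens position: move one step toward
    # increasing sharpness, stop at a local maximum near the starting
    # point -- that position becomes the quasi-focus point.
    pos = start
    for _ in range(max_steps):
        here = sharpness_at(pos)
        if sharpness_at(pos + 1) > here:
            pos += 1
        elif sharpness_at(pos - 1) > here:
            pos -= 1
        else:
            break  # local maximum reached: quasi-focus point found
    return pos
```

Because the preset position is already near the typical document-shooting focus, the climb converges in few steps, which is the speed advantage the text claims over searching from infinity.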
S1104, the terminal device receives an operation on the control for document shooting.
In this embodiment of the present application, the operation of the control for document shooting may include a click operation or a slide operation, which is not limited in this embodiment of the present application.
S1105, in response to the operation on the control for document shooting, the terminal device acquires a first image when focusing is completed.
In the embodiment of the present application, the first image acquired when focusing is completed may be understood as the first image acquired at the quasi-focus point. The definition of the quasi-focus point is described in S902.
Based on this, when receiving an operation of starting the document shooting function from the user, the terminal device can set a suitable focus position according to the user's habit of shooting documents, so that focusing can be completed quickly from this position, which improves both the focusing speed and the document shooting speed.
Optionally, before S1101, the method further includes: the terminal equipment receives an operation of opening a first application; responding to the operation of opening the first application, and displaying a second interface by the terminal equipment; the second interface comprises a first menu bar; the first menu bar comprises a control for starting a document shooting mode and a control for starting a shooting mode; the method comprises the steps that terminal equipment receives operation of selecting a control for starting a document shooting mode; the terminal equipment displays a first interface, and comprises: and responding to the operation of selecting the control for starting the document shooting mode, and displaying a first interface by the terminal equipment.
In the embodiment of the application, the first application may be a camera application; the second interface can be understood as the interface displayed when the camera application is opened. For example, the second interface may be the interface corresponding to a photographing mode, the first menu bar may be the first-level menu 400 in the interface shown in a in fig. 4, the control for starting the document shooting mode may be the document shooting control 401 in the interface shown in a in fig. 4, and the control for starting the photographing mode may be the photographing control in the interface shown in a in fig. 4.
Optionally, the method further includes: the terminal equipment receives the operation of selecting the control for starting the photographing mode; in response to the operation of selecting the control for starting the photographing mode, the terminal equipment pushes the lens to a second position by using the motor; the second position is different from the position indicated by the focusing parameter.
In the embodiment of the present application, the second position may be understood as infinity. For example, when the terminal device receives an operation of selecting the control for turning on the photographing mode, the terminal device may push the lens to infinity by using the motor and start a focus search from there until a quasi-focus point is found. Further, when the terminal device receives an operation of selecting a control for taking a picture in the interface corresponding to the photographing mode, the terminal device may obtain the picture based on the searched quasi-focus point. It can be understood that, in the document shooting mode, the terminal device starts the focus search from the position indicated by the focusing parameter; compared with starting the focus search from infinity in the photographing mode, focusing in the document shooting mode is faster, which in turn improves the document shooting speed.
Optionally, the method further comprises S1106 (not shown in fig. 11): the terminal device processes the document content in the first image to obtain a second image.
In the embodiment of the present application, the second image may be understood as the image obtained after the automatic document rectification processing corresponding to the document shooting mode.
Optionally, the first interface further includes a control for implementing the multi-beat mode, and when the control for implementing the multi-beat mode is not selected, S1106 includes: the terminal equipment displays a third interface; wherein the third interface comprises one or more of the following: a third image, a control for document rectification, a control for saving the third image, or a control for deleting the second image; the terminal equipment receives the operation of selecting the control for saving the third image; the third image is a part or all of the second image; in a third interface, a third image is overlaid on the preview image corresponding to the first image; in response to the operation of selecting the control for saving the third image, the terminal device corrects the third image and saves the third image to the second application.
In the embodiment of the present application, the control for implementing the multi-beat mode may be a multi-beat control 600 in the interface shown in a in fig. 6; the third interface may be an interface as shown in c in fig. 6; in the interface shown in c in fig. 6, the third image may be an image 604, the control for document rectification may be a document rectification control 605, the control for saving the third image may be a saving control 607, the control for deleting the second image may be a deleting control 606, and the operation of selecting the control for saving the third image may be a triggering operation for the saving control 607; the second application may be a gallery application.
Optionally, the method further includes: the terminal equipment receives the operation of selecting a control for document correction; in response to an operation of selecting a control for document rectification, the screen size of the third image is in an editable state; the terminal equipment receives an operation aiming at the third image; in response to the operation for the third image, the terminal device performs document rectification on the screen-size-processed document in the third image.
In the embodiment of the present application, as shown in the interface c in fig. 6, the operation of selecting a control for document rectification may be a trigger operation on the document rectification control 605, the operation on the third image may be a drag operation on four corner boxes in the image 604, and the like; the editable state may be understood as a state in which the image 604 is resized.
Optionally, the first interface further includes a control for implementing the multi-beat mode, and when the control for implementing the multi-beat mode is selected, S1106 includes: the terminal equipment processes the document content in the first image to obtain a second image; the method further comprises the following steps: the terminal equipment stores the second image and switches to a fourth interface; the fourth interface comprises a control used for document shooting and a second preview image.
In the embodiment of the present application, the control for implementing the multi-beat mode may be a multi-beat control 500 in the interface shown in a in fig. 5; the fourth interface may be an interface as shown in c in fig. 5, and the control for document shooting in the fourth interface may be a document shooting control 501 in the interface as shown in c in fig. 5.
Optionally, the method further includes: the terminal equipment receives the operation of selecting the second application; responding to the operation of selecting the second application, and displaying a fifth interface by the terminal equipment; the fifth interface comprises the identifier of the second image; the terminal equipment receives the operation of selecting the identifier of the second image; responding to the operation of selecting the identifier of the second image, and displaying a sixth interface by the terminal equipment; the sixth interface comprises a second image and a control for document rectification of the second image; the terminal equipment receives the operation of selecting a control for document correction of the second image; responding to the operation of selecting the control for document rectification of the second image, and displaying a seventh interface by the terminal equipment; wherein the seventh interface comprises one or more of: a first document selection box for selecting some or all of the documents in the second image, a control for saving the documents in the first document selection box, a control for deleting the second image, or a control for restoring the second image.
In this embodiment of the application, the operation of selecting the second application may be a trigger operation for the gallery application; the fifth interface may be an interface as shown in a in fig. 8, and the identifier of the second image may be a thumbnail corresponding to the picture 801 in the interface as shown in a in fig. 8; the operation of selecting the identifier of the second image may be a trigger operation for a thumbnail corresponding to the picture 801; the sixth interface may be an interface as shown in b in fig. 8; as shown in b of fig. 8, the second image may be a picture 801, and the control for document rectification of the second image may be a document rectification control 805; the operation of the control selected for document rectification of the second image may be a trigger operation for document rectification control 805; the seventh interface may be an interface as shown in c in fig. 8; in the interface shown in c in fig. 8, the first document selection box for selecting some or all of the documents in the second image may be a document selection box 806, the control for saving the documents in the first document selection box may be a save control 809, the control for deleting the second image may be a delete control 807, and the control for restoring the second image may be a restore control 808.
Optionally, the terminal device further includes one or more of the following: the system comprises a camera sensor, an image signal processing ISP module, an image processing module, a document identification module or an image correction module; s1106 includes: the terminal equipment utilizes the ISP module to process image data corresponding to the document content in the first image output by the camera sensor to obtain ISP processed image data; the terminal equipment utilizes the image processing module to perform image post-processing on the image data processed by the ISP to obtain the image data after the image processing; the terminal equipment performs document identification on the image data after the image processing by using a document identification module to obtain a target image after the document identification; and the terminal equipment utilizes the image correction module to perform document correction on the target image after the document identification to obtain a second image.
Optionally, the image data after ISP processing includes photographing stream data and preview stream data, and the image post-processing includes one or more of the following: image denoising processing, high dynamic range (HDR) image fusion processing, or character definition improving processing. The terminal device performing image post-processing on the ISP-processed image data by using the image processing module to obtain the image-processed image data includes: the terminal device judges whether the preview stream data meets an HDR scene; when the terminal device determines that the preview stream data meets the HDR scene, the terminal device uses the image processing module to perform HDR image fusion processing and single-frame character definition improving processing on multiple frames of photographing stream data with different exposures; when the terminal device determines that the preview stream data does not meet the HDR scene, the terminal device uses the image processing module to perform multi-frame image denoising processing and single-frame character definition improving processing on multiple frames of photographing stream data under different noise conditions.
In this embodiment, the image denoising processing may include single-frame image denoising processing and multi-frame image denoising processing; specifically, the image denoising processing may be YUV-domain image denoising processing. The character definition improving processing includes single-frame character definition improving processing and multi-frame character definition improving processing; specifically, the method for the character definition improving processing may be an SR processing method. The step of determining the HDR scene may refer to the step of determining whether the current scene is an HDR scene in S1001, and is not described herein again.
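The branching just described can be summarised in a short sketch. The HDR test below is a heuristic stand-in for the S1001 check (the patent does not disclose the actual criterion), and the threshold values are invented; the two returned strings simply name the processing path chosen:

```python
import numpy as np

def is_hdr_scene(preview, low=0.1, high=0.9, frac=0.2):
    # Heuristic stand-in for the S1001 decision: treat the scene as HDR
    # when a large share of preview pixels (normalised intensities) is
    # either very dark or very bright.
    dark = np.mean(preview < low)
    bright = np.mean(preview > high)
    return (dark + bright) > frac

def post_process(preview):
    # Branch as described: HDR fusion for HDR scenes, multi-frame
    # denoising otherwise, each followed by character-definition SR.
    if is_hdr_scene(preview):
        return "hdr_fusion + single_frame_sr"
    return "multi_frame_denoise + single_frame_sr"

backlit = np.array([0.02] * 40 + [0.98] * 40 + [0.5] * 20)  # mixed dark/bright
flat = np.full(100, 0.5)                                    # evenly lit page
```

A backlit document triggers the HDR path, while an evenly lit page takes the denoising path, matching the two branches in the text.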
Optionally, the first interface further includes a control for automatically turning on the flash, and the method further includes: the terminal device receives an operation of selecting the control for automatically turning on the flash; in response to the operation of selecting the control for automatically turning on the flash, the terminal device displays an eighth interface. The eighth interface includes one or more of the following: the control for automatically turning on the flash displayed in a highlighted form, a control for turning off the flash, a control for turning on the flash, or a control for keeping the flash always on.
In the embodiment of the present application, the control for automatically turning on the flash may be the automatic flash control 403 in the interface shown in a in fig. 4; the eighth interface may be the interface shown in b in fig. 4. As shown in b in fig. 4, the control for automatically turning on the flash displayed in a highlighted form may be the automatic flash control 403 highlighted in a different color or in bold, the control for turning off the flash may be the flash-off control 408, the control for turning on the flash may be the flash-on control 407, and the control for keeping the flash always on may be the flash always-on control 409.
Optionally, the method further includes: in response to an operation of selecting the control for keeping the flash always on, the terminal device turns on the flash; in response to an operation on the control for document shooting, the terminal device acquires a first image with the flash kept on when focusing is completed.
Optionally, the method further includes: when the terminal device does not receive, within a first time threshold, an operation on the control for turning off the flash, the control for turning on the flash, or the control for keeping the flash always on, the terminal device switches back to the first interface.
For example, in the interface shown in b in fig. 4, when the terminal device does not receive a trigger on the flash-on control 407, the flash-off control 408, or the flash always-on control 409 within 5 seconds, 10 seconds, or the like, the terminal device may display the interface shown in a in fig. 4.
Optionally, the first interface further includes a second document selection box for framing the document detected by the terminal device, and the method further includes: when the terminal device detects that the proportion of the second document selection box in the picture is smaller than a first threshold, the terminal device displays a prompt message; the prompt message is used to prompt the user to move the terminal device closer to the document for shooting.
In this embodiment of the application, as in the interface shown in fig. 7, the second document selection box may be the document selection box 701, and the prompt message may be the prompt message 702, which may read: a document is detected; please move the device closer to the document to shoot.
Optionally, the terminal device includes the ISP module, and the terminal device pushing the lens to the position indicated by the focusing parameter by using a motor includes: the terminal device sends effect configuration parameters to the ISP module and pushes the lens to the position indicated by the focusing parameter by using the motor; the effect configuration parameters include the focusing parameter.
Optionally, the effect configuration parameters further include one or more of the following: a camera sensor image output parameter, an exposure parameter, a contrast parameter, a sharpness parameter, or a definition parameter.
Optionally, a plurality of sets of corresponding relationships are stored in the terminal device, where one set of corresponding relationships is used to indicate a relationship between the focusing parameter and the shooting distance; s1102 includes: the terminal equipment determines the shooting distance between the terminal equipment and the shot document; and the terminal equipment determines a focusing parameter corresponding to the shooting distance from the corresponding relation, and pushes the lens to a position indicated by the focusing parameter by using a motor.
In the embodiment of the present application, the corresponding relationship may refer to the corresponding relationship in the step shown in S902, and is not described herein again.
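The correspondence between shooting distance and focusing parameter can be sketched as a simple lookup table. The distance buckets and motor-code values below are invented for illustration; the patent only discloses that such a correspondence exists (and, per the next paragraph, that a typical shooting distance is 30 cm-40 cm):

```python
# Hypothetical correspondence table mapping shooting-distance ranges
# (cm) to preset focusing parameters; the motor-code values are made
# up here, the patent does not disclose them.
FOCUS_TABLE = [
    (30, 40, 350),   # (min_cm, max_cm, focusing parameter)
    (40, 60, 300),
    (60, 100, 260),
]

def focus_param_for(distance_cm, default=280):
    # Pick the focusing parameter whose distance bucket contains the
    # measured shooting distance; fall back when no range matches.
    for lo, hi, param in FOCUS_TABLE:
        if lo <= distance_cm < hi:
            return param
    return default
```

In S1102 the device would measure the distance to the document, look up the matching parameter, and push the lens to the indicated position.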
Optionally, the shooting distance is 30 cm-40 cm.
Optionally, the first application is a camera application.
The method provided by the embodiment of the present application is explained above with reference to fig. 4 to fig. 11, and the apparatus provided by the embodiment of the present application for performing the method is described below. As shown in fig. 12, fig. 12 is a schematic structural diagram of a document shooting device provided in the embodiment of the present application, where the document shooting device may be a terminal device in the embodiment of the present application, and may also be a chip or a chip system in the terminal device.
As shown in fig. 12, the document photographing apparatus 120 may be used in a communication device, circuit, hardware component, or chip, and includes a display unit 1201 and a processing unit 1202. The display unit 1201 is configured to support the display steps performed in the document photographing method; the processing unit 1202 is configured to support the information processing steps performed by the document photographing apparatus.
In one possible embodiment, the document photographing apparatus may further include: a storage unit 1204. The processing unit 1202 and the storage unit 1204 are connected by a line.
The storage unit 1204 may include one or more memories, which may be devices in one or more devices or circuits for storing programs or data.
The storage unit 1204 may be separately provided and connected to the processing unit 1202 of the document photographing apparatus through a communication line. The storage unit 1204 may also be integrated with the processing unit 1202.
In one possible embodiment, the document photographing apparatus may further include: a communication unit 1203. The communication unit 1203 may be an input or output interface, pin or circuit, or the like.
The storage unit 1204 may store computer-executable instructions of the methods in the terminal device to cause the processing unit 1202 to execute the methods in the above-described embodiments.
The storage unit 1204 may be a register, a cache, a RAM, or the like, in which case it may be integrated with the processing unit 1202. The storage unit 1204 may also be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, in which case it may be separate from the processing unit 1202.
Fig. 13 is a schematic diagram of a hardware structure of a control device according to an embodiment of the present application. As shown in fig. 13, the control device includes a processor 1301, a communication line 1304, and at least one communication interface (the communication interface 1303 in fig. 13 is taken as an example).
The processor 1301 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the solutions of the present application.
The communication lines 1304 may include circuitry to communicate information between the above-described components.
The communication interface 1303 may be implemented using any transceiver-like device for communicating with other devices or communication networks, such as an Ethernet or a wireless local area network (WLAN).
Possibly, the control device may also comprise a memory 1302.
The memory 1302 may be, but is not limited to, a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact disc, laser disc, optical disc, digital versatile disc, Blu-ray disc, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be separate and coupled to the processor via the communication line 1304. The memory may also be integrated with the processor.
The memory 1302 is configured to store computer-executable instructions for executing the solutions of the present application, and the execution is controlled by the processor 1301. The processor 1301 is configured to execute the computer-executable instructions stored in the memory 1302, so as to implement the document shooting method provided by the embodiments of the present application.
Possibly, the computer-executable instructions in the embodiments of the present application may also be referred to as application program code, which is not specifically limited in the embodiments of the present application.
In particular implementations, processor 1301 may include one or more CPUs, such as CPU0 and CPU1 in fig. 13, as one embodiment.
In particular implementations, for one embodiment, the control device may include multiple processors, such as processor 1301 and processor 1305 in fig. 13. Each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
Exemplarily, fig. 14 is a schematic structural diagram of a chip provided in an embodiment of the present application. Chip 140 includes one or more (including two) processors 1420 and a communication interface 1430.
In some embodiments, memory 1440 stores the following elements: an executable module or a data structure, or a subset thereof, or an expanded set thereof.
In the illustrated embodiment, memory 1440 may include both read-only memory and random-access memory, and may provide instructions and data to processor 1420. A portion of the memory 1440 may also include non-volatile random access memory (NVRAM).
In the illustrated embodiment, the processor 1420, the communication interface 1430, and the memory 1440 are coupled together via the bus system 1410. The bus system 1410 may include a power bus, a control bus, a status signal bus, and the like, in addition to a data bus. For ease of description, the various buses are identified in fig. 14 as the bus system 1410.
The methods described in the embodiments of the present application may be applied to the processor 1420 or implemented by the processor 1420. The processor 1420 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 1420 or by instructions in the form of software. The processor 1420 may be a general-purpose processor (e.g., a microprocessor or a conventional processor), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate, a transistor logic device, or a discrete hardware component, and the processor 1420 may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present application.
The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium mature in the field, such as a random access memory, a read only memory, a programmable read only memory, or a charged erasable programmable memory (EEPROM). The storage medium is located in the memory 1440, and the processor 1420 reads the information in the memory 1440 and performs the steps of the above-described method in conjunction with the hardware thereof.
In the above embodiments, the instructions stored by the memory for execution by the processor may be implemented in the form of a computer program product. The computer program product may be written in the memory in advance, or may be downloaded in the form of software and installed in the memory.
The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another via a wired connection (e.g., coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless connection (e.g., infrared, radio, or microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device, such as a server or data center, integrating one or more available media. The available media may include magnetic media (e.g., floppy disk, hard disk, or magnetic tape), optical media (e.g., digital versatile disc (DVD)), or semiconductor media (e.g., solid state disk (SSD)), etc.
The embodiment of the application also provides a computer readable storage medium. The methods described in the above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. Computer-readable media may include computer storage media and communication media, and may include any medium that can communicate a computer program from one place to another. A storage medium may be any target medium that can be accessed by a computer.
As one possible design, the computer-readable medium may include a compact disc read-only memory (CD-ROM), RAM, ROM, EEPROM, or other optical disk storage; the computer-readable medium may include a disk memory or other disk storage device. Also, any connecting line may also be properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or those wireless technologies are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
Combinations of the above should also be included within the scope of computer-readable media. The above description covers only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or substitution that can be readily conceived by a person skilled in the art within the technical scope disclosed herein shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (22)

1. A document shooting method, applied to a terminal device, wherein the terminal device comprises a motor, the motor is configured to control a lens in a camera to move, and the method comprises the following steps:
the terminal device displays a first interface, wherein the first interface comprises a control for document shooting and a first preview image;
the terminal device pushes the lens to a position indicated by a focusing parameter by using the motor, wherein the focusing parameter is a preset constant;
the terminal device starts focusing from the position indicated by the focusing parameter;
the terminal device receives an operation on the control for document shooting; and
in response to the operation on the control for document shooting, the terminal device acquires a first image when focusing is completed.
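The focusing flow recited in claim 1 — push the lens to a preset position, focus from there, then capture once focus converges — can be sketched as follows. All class names, method names, and the motor-position value below are illustrative assumptions, not part of the claimed implementation:

```python
# Hypothetical sketch of the claim-1 flow. DOC_FOCUS_PARAM stands in for
# the preset constant focusing parameter; the motor drives the lens to
# that position before fine autofocus starts, shortening convergence
# for close-up documents.

DOC_FOCUS_PARAM = 350  # assumed motor code for a typical document distance


class DocumentCamera:
    def __init__(self):
        self.lens_position = 0

    def push_lens(self, position):
        """The motor pushes the lens to the given position."""
        self.lens_position = position

    def focus_from(self, position):
        """Focusing starts from `position` rather than from infinity;
        an autofocus search around `position` would run here."""
        self.push_lens(position)
        return True  # focus completed

    def capture_document(self):
        self.push_lens(DOC_FOCUS_PARAM)            # push to preset position
        focused = self.focus_from(self.lens_position)  # focus from there
        if focused:                                 # capture on completion
            return "first_image"


cam = DocumentCamera()
print(cam.capture_document())  # -> first_image
```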
2. The method according to claim 1, wherein before the terminal device displays the first interface, the method further comprises:
the terminal device receives an operation of opening a first application;
in response to the operation of opening the first application, the terminal device displays a second interface, wherein the second interface comprises a first menu bar, and the first menu bar comprises a control for starting a document shooting mode and a control for starting a photographing mode;
the terminal device receives an operation of selecting the control for starting the document shooting mode; and
that the terminal device displays a first interface comprises: in response to the operation of selecting the control for starting the document shooting mode, the terminal device displays the first interface.
3. The method according to claim 1 or 2, wherein the method further comprises:
the terminal device receives an operation of selecting the control for starting the photographing mode; and
in response to the operation of selecting the control for starting the photographing mode, the terminal device pushes the lens to a second position by using the motor, wherein the second position is different from the position indicated by the focusing parameter.
4. The method according to claim 1, wherein the method further comprises:
the terminal device processes document content in the first image to obtain a second image.
5. The method according to claim 4, wherein the first interface further comprises a control for implementing a multi-shot mode; and when the control for implementing the multi-shot mode is not selected, that the terminal device processes the document content in the first image comprises:
the terminal device displays a third interface, wherein the third interface comprises one or more of the following: a third image, a control for document rectification, a control for saving the third image, or a control for deleting the second image; the third image is a part or all of the second image; and in the third interface, the third image is overlaid on a preview image corresponding to the first image;
the terminal device receives an operation of selecting the control for saving the third image; and
in response to the operation of selecting the control for saving the third image, the terminal device rectifies the third image and saves the third image to a second application.
6. The method according to claim 5, wherein the method further comprises:
the terminal device receives an operation of selecting the control for document rectification;
in response to the operation of selecting the control for document rectification, a screen size of the third image enters an editable state;
the terminal device receives an operation on the third image; and
in response to the operation on the third image, the terminal device performs document rectification on the document, in the third image, that has undergone screen size processing.
7. The method according to claim 4, wherein the first interface further comprises a control for implementing a multi-shot mode; and when the control for implementing the multi-shot mode is selected, that the terminal device processes the document content in the first image to obtain a second image comprises:
the terminal device processes the document content in the first image to obtain the second image; and
the method further comprises: the terminal device saves the second image and switches to a fourth interface, wherein the fourth interface comprises the control for document shooting and a second preview image.
8. The method according to claim 7, wherein the method further comprises:
the terminal device receives an operation of selecting the second application;
in response to the operation of selecting the second application, the terminal device displays a fifth interface, wherein the fifth interface comprises an identifier of the second image;
the terminal device receives an operation of selecting the identifier of the second image;
in response to the operation of selecting the identifier of the second image, the terminal device displays a sixth interface, wherein the sixth interface comprises the second image and a control for document rectification of the second image;
the terminal device receives an operation of selecting the control for document rectification of the second image; and
in response to the operation of selecting the control for document rectification of the second image, the terminal device displays a seventh interface, wherein the seventh interface comprises one or more of the following: a first document selection box for selecting a part or all of the documents in the second image, a control for saving the documents in the first document selection box, a control for deleting the second image, or a control for restoring the second image.
9. The method according to any one of claims 4 to 8, wherein the terminal device further comprises one or more of the following: a camera sensor, an image signal processing (ISP) module, an image processing module, a document identification module, or an image correction module; and that the terminal device processes the document content in the first image to obtain a second image comprises:
the terminal device processes, by using the ISP module, the image data that is output by the camera sensor and that corresponds to the document content in the first image, to obtain ISP-processed image data;
the terminal device performs image post-processing on the ISP-processed image data by using the image processing module, to obtain post-processed image data;
the terminal device performs document identification on the post-processed image data by using the document identification module, to obtain a target image after document identification; and
the terminal device performs document correction on the target image after document identification by using the image correction module, to obtain the second image.
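The processing chain of claim 9 (camera sensor output → ISP module → image processing module → document identification module → image correction module) can be sketched as a simple pipeline; the function names and string tags below are illustrative placeholders only, not the patent's implementation:

```python
# Hypothetical sketch of the claim-9 processing chain. Each stage is a
# placeholder that tags its input so the ordering of the pipeline is
# visible in the final result.

def isp_process(raw):            # ISP module
    return f"isp({raw})"

def post_process(data):          # image processing module (post-processing)
    return f"post({data})"

def identify_document(data):     # document identification module
    return f"doc({data})"

def correct_document(target):    # image correction module
    return f"corr({target})"

def process_first_image(raw):
    data = isp_process(raw)          # ISP-processed image data
    data = post_process(data)        # post-processed image data
    target = identify_document(data) # target image after identification
    return correct_document(target)  # -> the second image

print(process_first_image("sensor"))  # -> corr(doc(post(isp(sensor))))
```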
10. The method according to claim 9, wherein the ISP-processed image data comprises photographing stream data and preview stream data, and the image post-processing comprises one or more of the following: image noise reduction processing, high dynamic range (HDR) image fusion processing, or character definition improvement processing; and
that the terminal device performs image post-processing on the ISP-processed image data by using the image processing module, to obtain post-processed image data, comprises:
the terminal device determines whether the preview stream data satisfies an HDR scene;
when the terminal device determines that the preview stream data satisfies the HDR scene, the terminal device performs, by using the image processing module, HDR image fusion processing on multiple frames of photographing stream data with different exposure degrees, and single-frame character definition improvement processing; and
when the terminal device determines that the preview stream data does not satisfy the HDR scene, the terminal device performs, by using the image processing module, multi-frame image noise reduction processing on multiple frames of photographing stream data with different noise conditions, and single-frame character definition improvement processing.
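The branching in claim 10 can be sketched as follows. The HDR-scene test shown here (clipping at both ends of a preview luma histogram) and its threshold are assumptions for illustration only; the patent does not specify how the HDR scene is detected:

```python
# Hypothetical sketch of the claim-10 branch: fuse differently exposed
# frames for an HDR scene, otherwise run multi-frame noise reduction;
# either branch is followed by single-frame text-sharpening.

def is_hdr_scene(preview_luma_histogram, clip_ratio=0.05):
    """Assume an HDR scene when a notable share of preview pixels is
    clipped at BOTH ends of the luma range (deep shadows + blown
    highlights). The 5% threshold is invented for illustration."""
    total = sum(preview_luma_histogram)
    dark = preview_luma_histogram[0] / total
    bright = preview_luma_histogram[-1] / total
    return dark > clip_ratio and bright > clip_ratio

def post_process_frames(preview_hist, photo_frames):
    if is_hdr_scene(preview_hist):
        fused = f"hdr_fuse({len(photo_frames)} exposures)"
    else:
        fused = f"denoise({len(photo_frames)} frames)"
    return f"sharpen_text({fused})"  # character definition improvement

# Heavy clipping at both histogram ends -> HDR branch
hist = [300, 10, 10, 10, 400]
print(post_process_frames(hist, ["ev-2", "ev0", "ev+2"]))
# -> sharpen_text(hdr_fuse(3 exposures))
```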
11. The method according to claim 1, wherein the first interface further comprises a control for automatically turning on a flash, and the method further comprises:
the terminal device receives an operation of selecting the control for automatically turning on the flash; and
in response to the operation of selecting the control for automatically turning on the flash, the terminal device displays an eighth interface, wherein the eighth interface comprises one or more of the following: the control for automatically turning on the flash, which is displayed in a highlighted form, a control for turning off the flash, a control for turning on the flash, or a control for keeping the flash always on.
12. The method according to claim 11, wherein the method further comprises:
when the terminal device receives, within a first time threshold, no operation on the control for turning off the flash, the control for turning on the flash, or the control for keeping the flash always on, the terminal device switches to the first interface.
13. The method according to claim 11, wherein the method further comprises:
in response to an operation of selecting the control for keeping the flash always on, the terminal device turns on the flash; and
in response to the operation on the control for document shooting, the terminal device acquires the first image, with the flash kept on, when focusing is completed.
14. The method according to claim 1, wherein the first interface further comprises a second document selection box for framing a document detected by the terminal device, and the method further comprises:
when the terminal device detects that the proportion of the second document selection box in the picture is smaller than a first threshold, the terminal device displays prompt information, wherein the prompt information prompts the user to move the terminal device closer to the document for shooting.
15. The method according to claim 1, wherein the terminal device comprises an ISP module, and that the terminal device pushes the lens to a position indicated by a focusing parameter by using the motor comprises:
the terminal device sends effect configuration parameters to the ISP module, and pushes the lens to the position indicated by the focusing parameter by using the motor, wherein the effect configuration parameters comprise the focusing parameter.
16. The method according to claim 15, wherein the effect configuration parameters further comprise one or more of the following: a camera sensor output parameter, an exposure parameter, a contrast parameter, a definition parameter, or a sharpness parameter.
17. The method according to claim 1, wherein a plurality of sets of correspondences are stored in the terminal device, and each set of correspondences indicates a relation between a focusing parameter and a shooting distance; and that the terminal device pushes the lens to a position indicated by a focusing parameter by using the motor comprises:
the terminal device determines a shooting distance between the terminal device and a shot document; and
the terminal device determines, from the correspondences, the focusing parameter corresponding to the shooting distance, and pushes the lens to the position indicated by the focusing parameter by using the motor.
18. The method according to claim 17, wherein the shooting distance is 30 cm to 40 cm.
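The correspondence lookup of claims 17 and 18 can be sketched as a table mapping shooting-distance ranges to focusing parameters; the table values and function names below are invented for illustration and are not disclosed in the patent:

```python
# Hypothetical sketch of the claim-17 lookup: stored correspondences map
# a measured shooting distance to a focusing parameter (a motor
# position) that the motor then drives the lens to.

FOCUS_TABLE = [
    # (min_cm, max_cm, focus_parameter) - illustrative values
    (0, 30, 420),
    (30, 40, 350),   # claim 18: typical document distance of 30-40 cm
    (40, 100, 280),
]

def focus_parameter_for(distance_cm):
    """Return the focusing parameter whose distance range contains
    `distance_cm`, or None if no stored correspondence matches."""
    for lo, hi, param in FOCUS_TABLE:
        if lo <= distance_cm < hi:
            return param
    return None

print(focus_parameter_for(35))  # -> 350
```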
19. The method according to claim 2, wherein the first application is a camera application.
20. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the computer program, the electronic device is caused to perform the method according to any one of claims 1 to 19.
21. A computer-readable storage medium, storing a computer program, wherein when the computer program is executed by a processor, a computer is caused to perform the method according to any one of claims 1 to 19.
22. A computer program product, comprising a computer program, wherein when the computer program is executed, a computer is caused to perform the method according to any one of claims 1 to 19.
CN202110926928.2A 2021-08-12 2021-08-12 Document shooting method, electronic device and storage medium Active CN113810604B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110926928.2A CN113810604B (en) 2021-08-12 2021-08-12 Document shooting method, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN113810604A true CN113810604A (en) 2021-12-17
CN113810604B CN113810604B (en) 2023-04-07

Family

ID=78893523

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110926928.2A Active CN113810604B (en) 2021-08-12 2021-08-12 Document shooting method, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN113810604B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101634796A (en) * 2009-08-20 2010-01-27 上海合合信息科技发展有限公司 Camera automatic zooming method and system
US20140032406A1 (en) * 2008-01-18 2014-01-30 Mitek Systems Systems for Mobile Image Capture and Remittance Processing of Documents on a Mobile Device
CN106817533A (en) * 2015-11-27 2017-06-09 小米科技有限责任公司 Image processing method and device
CN107979727A (en) * 2017-11-30 2018-05-01 努比亚技术有限公司 A kind of document image processing method, mobile terminal and computer-readable storage medium
CN108737712A (en) * 2017-04-24 2018-11-02 中兴通讯股份有限公司 A kind of photographic method and device
CN109218603A (en) * 2018-06-14 2019-01-15 三星电子(中国)研发中心 A kind of camera control method and device
CN109559365A (en) * 2018-11-30 2019-04-02 努比亚技术有限公司 File scanning method, device, mobile terminal and storage medium

Non-Patent Citations (2)

Title
Online video: "MIUI document mode", MIUI Operation Video Screenshots Document *
Online video: "CamScanner operation video", Video Screenshots *

Cited By (6)

Publication number Priority date Publication date Assignee Title
CN115526788A (en) * 2022-03-18 2022-12-27 荣耀终端有限公司 Image processing method and device
CN116088740A (en) * 2022-05-30 2023-05-09 荣耀终端有限公司 Interface processing method and device
CN116088740B (en) * 2022-05-30 2023-10-31 荣耀终端有限公司 Interface processing method and device
CN117177064A (en) * 2022-05-30 2023-12-05 荣耀终端有限公司 Shooting method and related equipment
CN115623319A (en) * 2022-08-30 2023-01-17 荣耀终端有限公司 Shooting method and electronic equipment
CN115623319B (en) * 2022-08-30 2023-11-03 荣耀终端有限公司 Shooting method and electronic equipment

Also Published As

Publication number Publication date
CN113810604B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN113810604B (en) Document shooting method, electronic device and storage medium
CN114205522B (en) Method for long-focus shooting and electronic equipment
CN111183632A (en) Image capturing method and electronic device
CN113489894B (en) Shooting method and terminal in long-focus scene
US11949978B2 (en) Image content removal method and related apparatus
CN113194242B (en) Shooting method in long-focus scene and mobile terminal
CN109923850B (en) Image capturing device and method
CN110430357B (en) Image shooting method and electronic equipment
CN115526787B (en) Video processing method and device
CN113364976A (en) Image display method and electronic equipment
CN115529413A (en) Shooting method and related device
CN113364975B (en) Image fusion method and electronic equipment
CN114466101B (en) Display method and electronic equipment
CN113709355B (en) Sliding zoom shooting method and electronic equipment
CN112989092A (en) Image processing method and related device
CN115802144B (en) Video shooting method and related equipment
CN114979458B (en) Image shooting method and electronic equipment
CN115022526B (en) Full depth image generation method and device
CN116939363B (en) Image processing method and electronic equipment
CN115526788A (en) Image processing method and device
CN117911299A (en) Video processing method and device
CN116546313A (en) Shooting restoration method and electronic equipment
CN116939363A (en) Image processing method and electronic equipment
CN116567394A (en) Method for sharing multimedia file, transmitting terminal equipment and receiving terminal equipment
CN117714849A (en) Image shooting method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant