CN110602358B - Image acquisition method and electronic equipment - Google Patents


Info

Publication number
CN110602358B
Authority
CN
China
Prior art keywords
screen
input
area
sub
target
Prior art date
Legal status
Active
Application number
CN201910803984.XA
Other languages
Chinese (zh)
Other versions
CN110602358A (en)
Inventor
董明哲
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201910803984.XA
Publication of CN110602358A
Application granted
Publication of CN110602358B
Status: Active
Anticipated expiration

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M1/00 - Substation equipment, e.g. for use by subscribers
    • H04M1/02 - Constructional features of telephone sets
    • H04M1/0202 - Portable telephone sets, e.g. cordless phones, mobile phones or bar type handsets
    • H04M1/026 - Details of the structure or mounting of specific components
    • H04M1/0264 - Details of the structure or mounting of specific components for a camera module assembly
    • H04M1/0266 - Details of the structure or mounting of specific components for a display module assembly
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 - Constructional details
    • H04N23/57 - Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/62 - Control of parameters via user interfaces
    • H04N23/63 - Control of cameras or camera modules by using electronic viewfinders

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
  • Studio Devices (AREA)

Abstract

The embodiment of the invention provides an image acquisition method and electronic equipment, relates to the technical field of communication, and aims to solve the problem that images of files acquired by electronic equipment have a high degree of distortion. The method comprises the following steps: receiving a first input of a user; in response to the first input, acquiring an image of a first object through at least one camera, wherein the at least one camera is located in a first screen area of the electronic equipment, the first object is an object in a target file, and the first object is located within the coverage area of the first screen area; and the distance between the target file and the first screen area is smaller than or equal to a preset threshold value. The method can be applied to the scenario of acquiring an image of a document.

Description

Image acquisition method and electronic equipment
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to an image acquisition method and electronic equipment.
Background
With the development of terminal technology, the application of electronic equipment is more and more extensive, and the requirements of users on the performance of the electronic equipment are higher and higher.
Generally, a user may trigger the electronic device to take a picture of a certain file to obtain a picture of the file, and then the electronic device may recognize information in the picture to obtain content of the file, so that the user may further edit the content of the file.
However, when photographing a document, the electronic device may be affected by ambient light, the user's photographing technique, and the like, so the photograph taken of the document may be distorted; as a result, the image of the document acquired by the electronic device has a high degree of distortion.
Disclosure of Invention
The embodiment of the invention provides an image acquisition method and electronic equipment, and aims to solve the problem that images of files acquired by electronic equipment have a high degree of distortion.
In order to solve the technical problem, the present application is implemented as follows:
In a first aspect, an embodiment of the present invention provides an image acquisition method, which may be applied to an electronic device. The method may include: receiving a first input of a user; in response to the first input, acquiring an image of a first object through at least one camera, wherein the at least one camera is located in a first screen area of the electronic device, the first object is an object in a target file that is located within the coverage area of the first screen area, and the first object is located within the acquisition range of the at least one camera; and the distance between the target file and the first screen area is smaller than or equal to a preset threshold value.
In a second aspect, an embodiment of the present invention provides an electronic device, including a receiving module and an acquisition module. The receiving module is configured to receive a first input of a user; the acquisition module is configured to, in response to the first input received by the receiving module, acquire an image of a first object through at least one camera, wherein the at least one camera is located in a first screen area of the electronic device, the first object is an object in a target file that is located within the coverage area of the first screen area, and the first object is located within the acquisition range of the at least one camera; and the distance between the target file and the first screen area is smaller than or equal to a preset threshold value.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the image acquisition method according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the image acquisition method according to the first aspect.
In the embodiment of the invention, the electronic equipment can receive a first input of a user; and in response to the first input, acquiring an image of a first object (an object in the target file within the coverage of the first screen area and an object within the acquisition range of the at least one camera) through the at least one camera (located in the first screen area of the electronic equipment); and the distance between the target file and the first screen area is smaller than or equal to a preset threshold value. According to the scheme, when the electronic equipment acquires the image of the first object through the at least one camera positioned in the first screen area of the electronic equipment, on one hand, the first screen area can provide light for the at least one camera, so that the at least one camera can be ensured to successfully acquire the image; on the other hand, because the distance between the target file and the first screen area is smaller than or equal to the preset threshold value, that is, the first screen area and the target file are approximately attached to each other, the influence of the ambient light on the acquired image can be reduced, and thus the distortion of the image acquired by the electronic equipment is small. Therefore, the distortion degree of the image collected by the camera can be reduced on the basis of ensuring that the camera successfully collects the image.
Drawings
Fig. 1 is a schematic diagram of the architecture of a possible android operating system according to an embodiment of the present invention;
Fig. 2 is a schematic view of an electronic device provided with a foldable screen according to an embodiment of the present invention;
Fig. 3 is the first schematic diagram of an image acquisition method according to an embodiment of the present invention;
Fig. 4 is the second schematic diagram of an image acquisition method according to an embodiment of the present invention;
Fig. 5 is the first schematic interface diagram of an application of the image acquisition method according to an embodiment of the present invention;
Fig. 6 is the second schematic interface diagram of an application of the image acquisition method according to an embodiment of the present invention;
Fig. 7 is the third schematic diagram of an image acquisition method according to an embodiment of the present invention;
Fig. 8 is the fourth schematic diagram of an image acquisition method according to an embodiment of the present invention;
Fig. 9 is the third schematic interface diagram of an application of the image acquisition method according to an embodiment of the present invention;
Fig. 10 is the fourth schematic interface diagram of an application of the image acquisition method according to an embodiment of the present invention;
Fig. 11 is the fifth schematic diagram of an image acquisition method according to an embodiment of the present invention;
Fig. 12 is the fifth schematic interface diagram of an application of the image acquisition method according to an embodiment of the present invention;
Fig. 13 is the sixth schematic interface diagram of an application of the image acquisition method according to an embodiment of the present invention;
Fig. 14 is the first schematic structural diagram of an electronic device according to an embodiment of the present invention;
Fig. 15 is the second schematic structural diagram of an electronic device according to an embodiment of the present invention;
Fig. 16 is the third schematic structural diagram of an electronic device according to an embodiment of the present invention;
Fig. 17 is the fourth schematic structural diagram of an electronic device according to an embodiment of the present invention;
Fig. 18 is the fifth schematic structural diagram of an electronic device according to an embodiment of the present invention;
Fig. 19 is a hardware schematic diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The term "and/or" herein is an association relationship describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. The symbol "/" herein denotes a relationship in which the associated object is or, for example, a/B denotes a or B.
The terms "first" and "second," etc. herein are used to distinguish between different objects and are not used to describe a particular order of objects. For example, the first screen region and the second screen region, etc. are for distinguishing different regions, rather than for describing a particular order of screen regions.
In the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate an example, illustration, or description. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention should not be construed as being preferred or more advantageous than other embodiments or designs. Rather, use of the words "exemplary" or "for example" is intended to present the related concepts in a concrete fashion.
In the description of the embodiments of the present invention, unless otherwise specified, "a plurality" means two or more, for example, a plurality of elements means two or more elements, and the like.
Some of the nouns or terms referred to in the claims and the specification of the present application will be explained first.
Folding screen: the screen can be bent by a preset angle along a first preset direction under the action of external force. In general, the folding screen may include a flexible screen, wherein the flexible screen refers to a screen that can be arbitrarily rolled.
Single-sided screen: a screen is arranged on one side of the electronic device, and no screen is arranged on the other side.
Double-sided screen: screens are arranged on both sides of the electronic device. For example, screens may be provided on two opposite sides of the electronic device, respectively.
Multi-sided screen: screens are provided on two or more surfaces of the electronic device. For example, screens may be provided on three adjacent faces of the electronic device, respectively.
Acquisition range of a camera: the range within which the camera can capture a clear image of an object (e.g., a person or a thing). For example, assuming that the acquisition range of the camera is range A, the camera can acquire images of objects within range A, but cannot acquire images of objects outside range A.
In practical implementation, the acquisition range of a camera is usually characterized by its horizontal field of view: the smaller the horizontal field of view, the smaller the acquisition range; the larger the horizontal field of view, the larger the acquisition range. The horizontal field of view is in turn determined by the focal length: the larger the focal length f, the smaller the horizontal field of view; the smaller the focal length f, the larger the horizontal field of view.
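As an illustration of this relationship only (not part of the patent), the following sketch computes the horizontal field of view of an ideal pinhole camera from its focal length using the standard formula; the sensor width used in the example is an assumed value.

    import kotlin.math.atan

    // Horizontal field of view (in degrees) of an ideal pinhole camera:
    // fov = 2 * atan(sensorWidth / (2 * focalLength)).
    // A longer focal length therefore gives a smaller horizontal field of view.
    fun horizontalFovDegrees(focalLengthMm: Double, sensorWidthMm: Double): Double =
        Math.toDegrees(2.0 * atan(sensorWidthMm / (2.0 * focalLengthMm)))

    fun main() {
        val sensorWidthMm = 6.4                              // assumed example value
        println(horizontalFovDegrees(4.0, sensorWidthMm))    // shorter focal length -> wider acquisition range
        println(horizontalFovDegrees(8.0, sensorWidthMm))    // longer focal length -> narrower acquisition range
    }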
Screen state of the folding screen: including a folded state and an unfolded state.
The folding state refers to a state that the folding screen is folded outwards or folded inwards, and at least two areas of the folding screen are located in different planes; the unfolded state means that the respective areas of the folding screen are located in the same plane.
For example, as shown in fig. 2, assume that the foldable screen includes a first screen area 210 and a second screen area 211; then, as shown in fig. 2 (a), the foldable screen 21 is schematically shown in an unfolded state, where an angle between the first screen area 210 and the second screen area 211 is 180 °, and at this time, the first screen area 210 and the second screen area 211 are located on the same plane. Also, the folding screen 21 may be folded outward or inward. The foldable screen 21 is folded outwards, so that the first screen area 210 is folded towards the second screen area 211 along the first direction 22; the folding screen 21 is folded inward such that the first screen area 210 is folded toward the second screen area 211 along the second direction 23.
Specifically, assuming that the included angle between the first screen area 210 and the second screen area 211 is α (as shown in fig. 2), if the foldable screen is folded outwards once, the included angle satisfies 180° < α ≤ 360°; if the foldable screen is folded inwards once, the included angle satisfies 0° ≤ α < 180°.
Wherein the folding screen is in a partially folded state (this case is not shown in fig. 2) when 0 ° < α < 180 ° or 180 ° < α < 360 °. When α is 360 °, as shown in fig. 2 (b), the foldable screen is in a fully folded state, and at this time, the first screen region 210 and the second screen region 211 are located on opposite sides of the electronic device (i.e., turned outward); when α is 0 °, the foldable screen is also in a fully folded state, in which the first screen region 210 and the second screen region 211 are opposite to each other (i.e., folded inward, and this case is not shown in fig. 2).
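For illustration only, the angle-based classification described above can be summarized in the following sketch; the enum and function names are assumptions introduced here and do not come from the patent.

    // Illustrative classification of the folding screen's state from the included
    // angle α (in degrees) between the first screen area and the second screen area.
    enum class FoldState { UNFOLDED, PARTIALLY_FOLDED, FULLY_FOLDED_OUTWARD, FULLY_FOLDED_INWARD }

    fun classifyFoldState(alpha: Double): FoldState = when {
        alpha == 180.0      -> FoldState.UNFOLDED              // both screen areas in the same plane
        alpha == 360.0      -> FoldState.FULLY_FOLDED_OUTWARD  // screen areas on opposite sides of the device
        alpha == 0.0        -> FoldState.FULLY_FOLDED_INWARD   // screen areas facing each other
        alpha in 0.0..360.0 -> FoldState.PARTIALLY_FOLDED      // 0° < α < 180° or 180° < α < 360°
        else                -> throw IllegalArgumentException("angle $alpha is out of range")
    }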
It should be noted that the folded state in the embodiments of the present invention may be a fully folded state or a partially folded state, and for convenience of description, the screen of the electronic device is illustrated in a fully folded state in the following embodiments. In practical implementations, the screen of the electronic device may also be in a partially folded state.
The embodiment of the invention provides an image acquisition method and electronic equipment, wherein the electronic equipment can receive a first input of a user; and in response to the first input, acquiring an image of a first object (an object in the target file within the coverage of the first screen area and an object within the acquisition range of the at least one camera) through the at least one camera (located in the first screen area of the electronic equipment); and the distance between the target file and the first screen area is smaller than or equal to a preset threshold value. According to the scheme, when the electronic equipment acquires the image of the first object through the at least one camera positioned in the first screen area of the electronic equipment, on one hand, the first screen area can provide light for the at least one camera, so that the at least one camera can be ensured to successfully acquire the image; on the other hand, because the distance between the target file and the first screen area is smaller than or equal to the preset threshold value, that is, the first screen area and the target file are approximately attached to each other, the influence of the ambient light on the acquired image can be reduced, and thus the distortion of the image acquired by the electronic equipment is small. Therefore, the distortion degree of the image collected by the camera can be reduced on the basis of ensuring that the camera successfully collects the image.
The electronic device in the embodiment of the present invention may be an electronic device having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present invention are not limited in particular.
The following describes a software environment to which the image acquisition method provided by the embodiment of the present invention is applied, by taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is a framework of the application, and a developer can develop some applications based on the application framework layer under the condition of complying with the development principle of the framework of the application.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the image acquisition method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the image acquisition method may operate based on the android operating system shown in fig. 1. Namely, the processor or the electronic device can implement the image acquisition method provided by the embodiment of the invention by running the software program in the android operating system.
The electronic equipment in the embodiment of the invention can be a mobile terminal or a non-mobile terminal. For example, the mobile terminal may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile terminal may be a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like; the embodiment of the present invention is not particularly limited.
An execution subject of the image acquisition method provided by the embodiment of the present invention may be the electronic device, or may also be a functional module and/or a functional entity capable of implementing the image acquisition method in the electronic device, which may be determined specifically according to actual use requirements, and the embodiment of the present invention is not limited. The following takes an electronic device as an example to exemplarily explain an image acquisition method provided by the embodiment of the present invention.
In the embodiment of the invention, when a user needs to acquire an image of a certain file (for example, a paper file or an electronic file), the user can trigger the electronic device to acquire the image of the first object through at least one camera positioned in a first screen area of the electronic device, so as to reduce the influence of ambient light, and thus ensure that the image acquired by the electronic device through the at least one camera is not distorted. Specifically, if the user needs to capture an image of content a (e.g., a first object in the embodiment of the present invention) in file 1, the user may perform an input (e.g., a first input in the embodiment of the present invention) on the electronic device, and bring a screen area (e.g., a first screen area in the embodiment of the present invention) of the electronic device close to file 1, and keep a distance between the screen area and file 1 smaller than or equal to a preset distance, so as to trigger the electronic device to capture the image of content a through at least one camera located in a coverage area of the screen area. According to the scheme, when the electronic equipment acquires the image of the target file through the first screen area, on one hand, the first screen area can provide light for the electronic equipment, so that the at least one camera can be ensured to acquire the image successfully; on the other hand, because the distance between the first screen area and the target file is less than or equal to the preset threshold value, that is, the first screen area and the target file are approximately attached to each other, the influence of the ambient light on the acquired image can be reduced, and thus the distortion of the image acquired by the electronic device is small. Therefore, the distortion degree of the image collected by the camera can be reduced on the basis of ensuring that the camera successfully collects the image.
The following describes an exemplary image capturing method provided by an embodiment of the present invention with reference to the drawings.
As shown in fig. 3, an embodiment of the present invention provides an image acquisition method, which may include steps 201 and 202 described below.
Step 201, the electronic device receives a first input of a user.
Optionally, in the embodiment of the present invention, the electronic device may include one screen, or may include a plurality of screens.
The first input may be an input of a user on a certain screen area (for example, a second screen area described below) on the screen of the terminal device.
Optionally, in the embodiment of the present invention, the first input may be any possible form of input, such as click input, long-press input, re-press input, drag input, slide input, and the like, which may be determined specifically according to actual use requirements, and the embodiment of the present invention is not limited.
The click input may be a single click, a double click, or consecutive clicks for a preset number of times. The long-press input may be an input in which contact is maintained for a first preset duration. The heavy-press input, also referred to as a pressure touch input, refers to an input in which the user presses with a pressure value greater than or equal to a pressure threshold. The drag input may be an input of dragging in any direction. The slide input may be an input of sliding in any direction.
In the embodiment of the present invention, the preset times, the first preset duration, and the pressure threshold may all be determined according to actual use requirements, and the embodiment of the present invention is not limited.
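Purely as an illustrative sketch of how such thresholds might be applied, the following code distinguishes a click, a long-press, and a heavy-press input; the parameter names and threshold values are assumptions, not values given in the patent.

    // Illustrative mapping from raw touch parameters to some of the input forms
    // described above; the defaults stand in for the "first preset duration" and
    // the "pressure threshold" and are arbitrary placeholders.
    enum class InputForm { CLICK, LONG_PRESS, HEAVY_PRESS }

    fun classifyInputForm(
        contactDurationMs: Long,
        pressure: Float,
        firstPresetDurationMs: Long = 500L,   // assumed value
        pressureThreshold: Float = 0.8f       // assumed value
    ): InputForm = when {
        pressure >= pressureThreshold               -> InputForm.HEAVY_PRESS
        contactDurationMs >= firstPresetDurationMs  -> InputForm.LONG_PRESS
        else                                        -> InputForm.CLICK
    }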
Step 202, the electronic device acquires an image of a first object through at least one camera in response to a first input.
The at least one camera may be located in a first screen area of the electronic device.
Optionally, in the embodiment of the present invention, the at least one camera may be a sub-screen (under-display) camera, that is, a camera disposed below the first screen area. Here, "below the first screen area" describes the position of the at least one camera relative to the first screen area when the screen of the electronic equipment faces the user.
The at least one camera is located in the first screen area of the electronic device, and it can be understood that the at least one camera is located below the screen of the electronic device, and an orthographic projection of the at least one camera on the screen of the electronic device is located in the first screen area.
Optionally, in this embodiment of the present invention, the first object may be an object in the target file, which is located within a coverage area of the first screen area, and the first object is an object located within a collection area of the at least one camera.
The first object is an object in the target file, which is located within the coverage area of the first screen area, and may be understood as that the first object is an object in the target file corresponding to the first screen area, that is, the first object is an object in the target file corresponding to an orthographic projection of the first screen area on the target file.
For the description of the acquisition range of at least one camera, reference may be made to the description of the acquisition range of the camera in the above noun explanation section, which is not described herein again.
Optionally, in this embodiment of the present invention, the distance between the target file and the first screen region may be smaller than or equal to a preset threshold, that is, the distance between the target file and the first screen region may lie in the range [0, preset threshold].
Optionally, in the embodiment of the present invention, the preset threshold may be any possible value such as 1mm, 2mm, or 3mm, and may be specifically determined according to an actual use requirement, and the embodiment of the present invention is not limited.
It can be understood that, in one case, when the distance between the target file and the first screen area is greater than 0 and less than or equal to the preset threshold, the target file is approximately attached to the first screen area; in another case, when the distance between the target file and the first screen area is 0, the target file completely fits the first screen area. Under the two conditions, a small amount of ambient light or no ambient light enters between the target file and the first screen area, so that the influence of the ambient light on the image collected by the electronic equipment can be avoided, the distortion of the image collected by the electronic equipment is reduced, the quality of the image collected by the electronic equipment is improved, and the identification accuracy of the electronic equipment on the content in the image is improved.
Optionally, in the embodiment of the present invention, the first screen area may be a whole area in one screen of the electronic device, or may also be a partial area in one screen of the electronic device, which may be determined specifically according to an actual use requirement, and the embodiment of the present invention is not limited.
Optionally, the first screen area and the second screen area may be areas in one screen of the electronic device, or the first screen area and the second screen area may be areas in different screens of the electronic device, which may be determined according to actual use requirements, and the embodiment of the present invention is not limited.
Further, when the first screen area and the second screen area are areas in one screen of the electronic device, the first screen area and the second screen area may be the same area in one screen or different areas in one screen, which may be determined according to actual use requirements, and the embodiment of the present invention is not limited.
Optionally, in an embodiment of the present invention, the electronic device may control the first screen area to emit light, and the light emitted by the first screen area may be irradiated on the first object and may be reflected back to the first screen area by the target file, so that the at least one camera may collect reflected light emitted by the first screen area and reflected back to the first screen area by the target file, and the electronic device may successfully collect the first image according to the reflected light collected by the at least one camera.
In the embodiment of the invention, the electronic equipment can acquire the image of the file through the camera positioned below the screen of the electronic equipment and can provide light for the file by means of the screen, so that the camera can acquire the reflected light which is emitted by the screen and reflected by the file, and the electronic equipment can be ensured to successfully acquire the image.
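A rough sketch of this capture flow is given below. The ScreenRegion and UnderDisplayCamera interfaces are hypothetical abstractions introduced only for illustration; they do not correspond to any concrete Android API, and the flow is a simplified reading of steps 201 and 202, not the authoritative implementation.

    // Hypothetical abstractions introduced only for this illustration.
    interface ScreenRegion {
        fun setEmission(color: Int, brightness: Float)   // light the region so it illuminates the target file
    }

    interface UnderDisplayCamera {
        fun captureReflectedLight(): ByteArray           // read the light reflected back through the screen
    }

    class FirstObjectCapture(
        private val firstScreenRegion: ScreenRegion,
        private val cameras: List<UnderDisplayCamera>,
        private val presetThresholdMm: Double
    ) {
        // The first screen area acts as the light source, and the camera(s) under it
        // collect the light reflected back by the target file.
        fun capture(documentDistanceMm: Double, color: Int, brightness: Float): List<ByteArray> {
            require(documentDistanceMm in 0.0..presetThresholdMm) {
                "the target file must be within the preset threshold of the first screen area"
            }
            firstScreenRegion.setEmission(color, brightness)
            return cameras.map { it.captureReflectedLight() }
        }
    }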
Optionally, in the embodiment of the present invention, after the electronic device acquires the first image through the at least one camera, the electronic device may prompt the user in a target prompting manner that the image acquisition is completed. The target prompting mode may be a vibration prompting mode, or may be a mode of displaying first prompting information (for example, any possible information such as "completed scanning") in another screen area (for example, a second screen area described below), which may be determined according to actual use requirements, and the embodiment of the present invention is not limited.
In the image acquisition method provided by the embodiment of the invention, when the electronic equipment acquires the image of the first object through the at least one camera positioned in the first screen area of the electronic equipment, on one hand, the first screen area can provide light for the at least one camera, so that the at least one camera can be ensured to successfully acquire the image; on the other hand, because the distance between the target file and the first screen area is smaller than or equal to the preset threshold value, namely the first screen area is approximately attached to the target file, the influence of ambient light on the acquired image can be reduced, and thus the distortion degree of the image acquired by the electronic equipment through the at least one camera is smaller. Therefore, the distortion degree of the image collected by the camera can be reduced on the basis of ensuring that the camera successfully collects the image.
Furthermore, in the embodiment of the present invention, because the distortion degree of the image (for example, the image of the first object) acquired by the electronic device through the at least one camera is low, the electronic device may accurately extract the content in the image, and the matching degree between the content and the content in the original file (for example, the target file) is high.
Optionally, in the embodiment of the present invention, the light that the electronic device controls the first screen area to emit may be light of any possible color and any possible brightness, which may be determined according to actual use requirements; the embodiment of the present invention is not limited.
For example, in the embodiment of the present invention, before the step 202, the embodiment of the present invention may further include a step 203 described below.
Step 203, the electronic device controls the first screen area to emit preset cold color light.
After the electronic device controls the first screen area to emit the preset cold color light, the color of the first screen area may be the preset color, and the brightness of the first screen area may be the first preset brightness, that is, the electronic device may control the first screen area to emit the light with the preset color and the first preset brightness.
Optionally, in the embodiment of the present invention, the preset color may be any possible cold color light, such as blue, blue-cyan, blue-violet, and blue-green, which may be determined specifically according to actual use requirements, and the embodiment of the present invention is not limited. The first preset brightness may be determined according to actual use requirements, and the embodiment of the present invention is not limited.
It should be noted that the preset cool light in the embodiment of the present invention may be pure color light, that is, the preset color may be pure color, such as the above blue-cyan color, or the above blue-violet color.
In the embodiment of the invention, on one hand, because the cold color light is pure color light, the intensity of the light reflected back to the first screen area by the target file is approximately uniform, so that the reflected light collected by each of the at least one camera is also approximately equal, which further ensures that the first image collected by the electronic equipment is more accurate. On the other hand, since ambient light is mostly warm color light, and since the reflectivity of an object (e.g., the target file) to warm color light differs from its reflectivity to cold color light, the embodiment of the invention can use the cold color light to avoid the influence of the ambient light on the acquisition of the first image. Therefore, the quality of the first image acquired by the electronic equipment can be guaranteed to be good.
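Reusing the hypothetical ScreenRegion abstraction from the earlier sketch, step 203 could be expressed roughly as follows; the concrete color value and brightness are assumed examples of the preset color and first preset brightness, which the patent leaves unspecified.

    // Assumed example values for the "preset color" and "first preset brightness".
    const val PRESET_COOL_COLOR: Int = 0x2F6FDE        // an arbitrary blue tone
    const val FIRST_PRESET_BRIGHTNESS: Float = 0.9f

    // Step 203: control the first screen area to emit the preset cool-coloured light
    // before the image of the first object is acquired.
    fun emitPresetCoolLight(firstScreenRegion: ScreenRegion) {
        firstScreenRegion.setEmission(PRESET_COOL_COLOR, FIRST_PRESET_BRIGHTNESS)
    }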
Optionally, in this embodiment of the present invention, the first input may be an input of a user to the acquisition control. Specifically, before the user performs the first input, the user may first trigger the electronic device to display the capture control in the second screen area. The second screen area and the first screen area may be the same area in the screen of the terminal device, or may also be different areas in the screen of the terminal device, and may be determined specifically according to actual use requirements, which is not limited in the embodiment of the present invention.
Illustratively, in conjunction with fig. 3, as shown in fig. 4, before step 201, the image capturing method provided by the embodiment of the present invention may further include step 204 and step 205 described below.
And step 204, the electronic equipment receives a second input of the user.
The second input may be an input of a target identifier by a user, where the target identifier may be used to indicate a target application. The target identifier may be an identifier displayed in a desktop of the electronic device (i.e., an application icon of an application), or may be an identifier displayed in an interface of an application installed in the electronic device. The method can be determined according to actual use requirements, and the embodiment of the invention is not limited.
For example, the target application may be a document scanning application, the user may trigger the electronic device to run the document scanning application by inputting the target identifier, and after the electronic device runs the document scanning application, the electronic device may enter a document scanning preparation state.
Optionally, in the embodiment of the present invention, the second input may be any possible form of input, such as click input, long-press input, re-press input, drag input, slide input, and the like, which may be determined specifically according to actual use requirements, and the embodiment of the present invention is not limited.
For the description of the input form of the second input, reference may be specifically made to the description of the first input in step 201, and details are not described here again.
Step 205, the electronic device responds to the second input, and displays a collection control in a second screen area of the electronic device.
Optionally, in this embodiment of the present invention, after the electronic device responds to the second input, the first interface may be displayed in the second screen area, and the collection control is displayed in the first interface. The first interface may be an interface of the target application program.
Optionally, in this embodiment of the present invention, the acquisition control may include at least one of a trigger control and a selection control.
The trigger control may be configured to trigger the acquisition of an image, and the selection control may be configured to perform either of the following: determining the coverage area, or determining the coverage area and triggering the acquisition of an image. It can be understood that the first input may be an input of the user on the trigger control, or an input of the user on the selection control, which may be determined according to actual use requirements; the embodiment of the present invention is not limited.
In the embodiment of the invention, the acquisition control can be in various forms, so that the diversity of the display acquisition control can be improved, and the flexibility of the image acquisition method provided by the embodiment of the invention can be improved.
Optionally, in this embodiment of the present invention, the selection control may be a selection area displayed with a second preset brightness in a second screen area of the electronic device.
For example, as shown in fig. 5, assuming that the second input is a click input of the target identifier by the user, the user may click on the target identifier, that is, the electronic device receives the second input of the user, and then the electronic device may display the capture control in the second screen area of the electronic device in response to the second input; if the acquisition control comprises a trigger control, the electronic device may display a trigger control 41 in a second screen area 40 of the electronic device, as shown in fig. 5 (a); if the acquisition control comprises a selection control, the electronic device can display a selection control 42 in a second screen area 40 of the electronic device, as shown in fig. 5 (b); if the acquisition controls include a trigger control and a selection control, the electronic device may display the trigger control 41 and the selection control 42 in a second screen area 40 of the electronic device, as shown in fig. 5 (c).
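As a rough illustration of the three cases in fig. 5, the following sketch selects which capture controls to display from a configuration flag; the types and method names are hypothetical and introduced only for this example.

    // Hypothetical view-layer hooks used only to illustrate step 205.
    interface SecondScreenArea {
        fun showTriggerControl()
        fun showSelectionControl()
    }

    data class CaptureControlConfig(val showTrigger: Boolean, val showSelection: Boolean)

    // In response to the second input, display the capture control(s) in the second
    // screen area, matching the three cases of fig. 5 (a), (b) and (c).
    fun displayCaptureControls(area: SecondScreenArea, config: CaptureControlConfig) {
        if (config.showTrigger) area.showTriggerControl()
        if (config.showSelection) area.showSelectionControl()
    }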
Optionally, in the embodiment of the present invention, after the electronic device displays the collection control in the second screen area, the user may further trigger the electronic device to cancel displaying the collection control in the second screen area, so that the electronic device may exit from document scanning.
In the embodiment of the invention, the user can trigger the electronic equipment to acquire the image through the at least one camera by inputting the acquisition control, so that the operation convenience and flexibility of the electronic equipment can be improved.
Optionally, in this embodiment of the present invention, when the acquisition control includes a trigger control and a selection control, the electronic device responds to the second input, and may respectively display the trigger control and the selection control (for example, the trigger control may be displayed first and then the selection control is displayed), or the electronic device responds to the second input, and simultaneously displays the trigger control and the selection control, which may be determined specifically according to actual use requirements, and this embodiment of the present invention is not limited.
Specifically, when the electronic device respectively displays the trigger control and the selection control, the second input may include a first sub-input and a second sub-input, the first sub-input may be used to trigger the electronic device to display the trigger control in a second screen area of the electronic device, and the second sub-input may be used to display the selection control in the second screen area of the electronic device.
For example, in the embodiment of the present invention, the step 205 may be specifically implemented by the following steps 205a and 205 b.
Step 205a, the electronic device responds to the first sub-input, and displays a trigger control in a second screen area of the electronic device.
The first sub-input may be an input of the target identifier by a user.
Optionally, in the embodiment of the present invention, the first sub-input may be any possible form of input, such as click input, long-press input, re-press input, drag input, and the like, and may be specifically determined according to actual use requirements, which is not limited in the embodiment of the present invention.
For the description of the input form of the first sub-input, reference may be specifically made to the description of the first input in step 201, and details are not described herein again.
For other descriptions in the step 205a, reference may be specifically made to the relevant descriptions in the step 204 and the step 205, and details are not described here again.
And step 205b, the electronic equipment responds to the second sub-input and displays a selection control in a second screen area of the electronic equipment.
Optionally, in the embodiment of the present invention, the second sub-input may be any possible form of input, such as a click input, a long-press input, a re-press input, a drag input, and the like, of the user on the second screen area, and may be determined specifically according to an actual use requirement, which is not limited in the embodiment of the present invention.
For the description of the input form of the second sub-input, reference may be specifically made to the description of the first input in step 201, and details are not described here again.
Optionally, in this embodiment of the present invention, the selection control may be determined according to the second sub-input. For example, when the second sub-input is a click input of the user on the second screen region, the selection control may be an enclosed region formed for a click position corresponding to the click input of the user.
For example, as shown in (a) of fig. 6, if the user makes four click inputs on the second screen area 50, the electronic device may display a selection control 51 at a click position corresponding to the four click inputs in response to the second sub-input, as shown in (b) of fig. 6. It can be understood that, in this embodiment, the selection control 51 is an enclosed area a formed by the click positions corresponding to the four-click input.
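For illustration, the enclosed area formed by the tap positions can be treated as a simple polygon, as in the sketch below; the data type and the ordering strategy are assumptions made for this example.

    import kotlin.math.atan2

    data class TapPoint(val x: Float, val y: Float)

    // Builds the enclosed region A of fig. 6 (b) from the click positions of the
    // second sub-input: the tap positions are ordered around their centroid so that
    // they form a simple (non-self-intersecting) polygon.
    fun selectionRegionFrom(taps: List<TapPoint>): List<TapPoint> {
        require(taps.size >= 3) { "at least three tap positions are needed to enclose an area" }
        val cx = taps.map { it.x }.average()
        val cy = taps.map { it.y }.average()
        return taps.sortedBy { atan2(it.y - cy, it.x - cx) }
    }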
In the embodiment of the invention, the acquisition control comprises the trigger control and the selection control, so that a user can trigger the electronic equipment to execute different operations through inputting different controls, and the flexibility of the user in operating the electronic equipment can be improved. And under the condition that the acquisition control comprises the trigger control and the selection control, the user can respectively trigger the electronic equipment to respectively display the trigger control and the selection control through the two sub-inputs, so that the flexibility of the electronic equipment to display the control can be improved.
Optionally, in this embodiment of the present invention, the selection control may be used to update the coverage area of the first screen area. Wherein, the user may trigger the electronic device to update the coverage of the first screen region through an input (e.g., a third sub-input described below) to the selection control. Specifically, the method for triggering the electronic device to update the coverage of the first screen area through the input of the selection control by the user will be described in detail in the following embodiments, and details are not repeated here.
Furthermore, in the embodiment of the present invention, since the selection control may be used to determine the coverage area of the first screen region, after the user triggers the electronic device to display the selection control through the second sub-input, the user may adjust the parameter of the selection control through an input to the selection control, so that the electronic device may correspondingly adjust the parameter of the first screen region according to the parameter of the selection control to update the coverage area of the first screen region, so that the coverage area of the first screen region may adapt to the parameter of the acquisition object (e.g., the target file), and further, the quality of the image acquired by the electronic device through the first screen region may be further improved.
Optionally, in the embodiment of the present invention, before the user triggers the electronic device to acquire the first image through the at least one camera, the electronic device may be triggered to update the coverage of the first screen area, and after the coverage of the first screen area is updated, the electronic device is triggered to acquire the image of the first object through the at least one camera in the first screen area after the coverage is updated.
It will be appreciated that in this case, the first input may include a third sub-input and a fourth sub-input, wherein the third sub-input may be an input by a user to the selection control, and the fourth sub-input may be an input by a user to the selection control or the trigger control.
Illustratively, in the image capturing method provided by the embodiment of the present invention, the step 202 may be specifically implemented by the following step 202a and step 202 b.
Step 202a, the electronic device responds to the third sub-input and updates the coverage of the first screen area.
Step 202b, the electronic device, in response to the fourth sub-input, acquires the first image through the at least one camera in the first screen area after the coverage area has been updated.
It should be noted that, in the embodiment of the present invention, the electronic device may update the coverage of the first screen area by adjusting a parameter of the first screen area. The parameter of the first screen region may include at least one of an area of the first screen region, a shape of the first screen region, and a position of the first screen region.
Specifically, in the embodiment of the present invention, the user may trigger the electronic device to adjust the parameter of the selection control according to the input parameter of the third sub-input by inputting the third sub-input to the selection control, so as to adjust the parameter of the first screen area, and update the coverage area of the first screen area.
The input parameters of the third sub-input may include at least one of the following: the input position of the third sub-input, the number of times of the third sub-input, and the input area of the third sub-input; the parameters of the selection control may include at least one of the following: the area of the selection control, the shape of the selection control, and the display position of the selection control.
It is to be appreciated that the type of the parameter of the selection control adjusted by the electronic device is the same as the type of the parameter of the first screen region adjusted. For example, when the user triggers the electronic device to adjust the area of the selection control, the electronic device may adjust the area of the first screen region; alternatively, the electronic device may adjust the position of the first screen region when the user triggers the electronic device to adjust the display position of the selection control.
Optionally, in this embodiment of the present invention, the parameter of the selection control and the parameter of the first screen area may be changed according to a first preset ratio. The first preset proportion can be any value larger than 0.
For example, taking the parameter of the selection control as the area of the selection control, the parameter of the first screen region as the area of the first screen region, and the first preset ratio as M as an example, when the area of the selection control is increased/decreased to M times of the original area (i.e., the area before adjustment), the area of the first screen region is also increased/decreased to M times of the original area.
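A minimal sketch of this proportional update, assuming a simple parameter record for the first screen area, is shown below; the data class and field names are illustrative only.

    // Illustrative synchronisation of the selection control and the first screen
    // area according to the first preset ratio M: when the selection control's area
    // is scaled by M, the first screen area's area is scaled by the same factor.
    data class RegionParams(val areaPx: Float, val centerX: Float, val centerY: Float)

    fun scaleWithSelectionControl(firstScreenRegion: RegionParams, m: Float): RegionParams {
        require(m > 0f) { "the first preset ratio M must be greater than 0" }
        return firstScreenRegion.copy(areaPx = firstScreenRegion.areaPx * m)
    }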
Optionally, in the embodiment of the present invention, the third sub-input and the fourth sub-input may be any possible form of input, such as click input, long-press input, heavy-press input, drag input, slide input, and the like, and may be determined specifically according to actual use requirements, which is not limited in the embodiment of the present invention.
For other descriptions of the input forms of the third sub-input and the fourth sub-input, reference may be specifically made to the description of the input form of the first input in step 201, and details are not described here again.
For example, taking the case in which the electronic device adjusts the area of the first screen region (that is, the parameter of the first screen region is the area of the first screen region), and assuming that the third sub-input is a drag input of the user on the selection control, the third sub-input may be an input in which the user presses any one border of the selection control and drags that border to move in a second preset direction; at this time, the positions of the other borders of the selection control that are not connected with this border remain unchanged, so that the area and the shape of the selection control are changed, and the area and the shape of the first screen area can be adjusted accordingly to update the coverage of the first screen area.
Optionally, in this embodiment of the present invention, the electronic device may display second prompt information in the second screen region to prompt the user to adjust the area of the first screen region. The second prompt information may be a text prompt such as "please select a scanning region" displayed in the second screen region, or any other possible prompt such as an animation prompt displayed in the second screen region, which may be determined according to actual use requirements; the embodiment of the present invention is not limited.
It should be noted that, if the user does not perform the third sub-input but directly performs the fourth sub-input, the parameter of the first screen area may be a preset parameter, that is, in this case, the coverage area of the first screen area is a preset coverage area, where the preset coverage area is the range corresponding to the preset parameter. For example, taking the preset parameter as the preset area of the first screen region as an example, the preset parameter may be the minimum area of the first screen region, or may be the maximum area of the first screen region, which may be determined according to actual use requirements; the embodiment of the present invention is not limited.
In the embodiment of the invention, the user can trigger the electronic device to adjust the parameters of the first screen region according to the parameters of the selection control through the third sub-input to the selection control so as to update the coverage range of the first screen region, that is, the coverage range of the first screen region can flexibly adapt to the size of the acquired object, so that the electronic device can acquire the image of the object located in the coverage range of the first screen region in the target file more accurately, and the flexibility and the accuracy of the electronic device for acquiring the image through at least one camera in the first screen region are improved.
Optionally, in this embodiment of the present invention, before the electronic device receives the first input of the user, the electronic device may determine the second screen area. When the electronic device comprises the first target screen and the first target screen is a folding screen, the screen states of the first target screen are different, and the second screen area determined by the electronic device is different, that is, the electronic device can determine the second screen area according to the screen state of the first target screen.
Illustratively, in conjunction with fig. 2, as shown in fig. 7, before step 201, the image capturing method provided by the embodiment of the present invention may further include step 206 described below.
Step 206, the electronic device determines a second screen area according to the screen state of the first target screen.
The screen state of the first target screen can be a folded state or an unfolded state.
For the related description of the folded state and the unfolded state, reference may be made to the related description of the folded state and the unfolded state in the above explanation part, and details are not described herein.
Optionally, in the embodiment of the present invention, the first target screen may be a single-sided screen or a multi-sided screen, which may be determined specifically according to actual use requirements, and the embodiment of the present invention is not limited.
Optionally, in the embodiment of the present invention, in the process in which the electronic device acquires the image of the first object through the at least one camera, that is, in the process in which the electronic device executes the step 202, even if the screen state of the first target screen changes, the electronic device may continue to acquire the image of the first object through the at least one camera in the first screen area that was determined before the screen state changed.
It should be noted that, in an actual implementation of the embodiment of the present invention, the electronic device may execute step 206 after executing step 204 and before executing step 205.
In the embodiment of the invention, before the user executes the first input, the electronic device can automatically determine, according to the screen state of the first target screen, the second screen area (which can be used for displaying the acquisition control) without the user triggering the determination manually, so that the operation process of the user can be simplified, and the man-machine interaction performance can be improved.
Optionally, in the embodiment of the present invention, when the screen type of the first target screen differs (for example, the first target screen may be a single-sided screen or a multi-sided screen), the second screen area determined by the electronic device according to the screen state of the first target screen may also differ.
Optionally, in a possible implementation manner, when the first target screen is a single-sided screen, as shown in fig. 8 in combination with fig. 7, the step 206 may be specifically implemented by the following steps 206a to 206c.
In step 206a, the electronic device determines a screen state of the first target screen.
Optionally, in this embodiment of the present invention, the first target screen may include a plurality of screen regions, and the second screen region and the first screen region may be screen regions in the plurality of screen regions.
Optionally, the electronic device may determine the screen state of the first target screen according to an included angle between any two screen regions in the plurality of screen regions.
Specifically, as shown in fig. 2 (a), when the included angle between any two of the plurality of screen regions is 180 °, the electronic device may determine that the first target screen is in the unfolded state, and when the included angle between any two of the plurality of screen regions is greater than 0 ° and less than 180 °, or greater than 180 °, the electronic device may determine that the first target screen is in the folded state, where the folded state includes a partially folded state and a fully folded state; for the description of the partially folded state and the fully folded state, reference may be made to the description related to the above noun explanation part, which is not repeated herein.
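The angle rule above can be summarized in a small sketch. The Kotlin below is illustrative only; the ScreenState enum, the classifyScreenState function and the tolerance parameter are assumptions introduced for this example, not terms used by the patent.

```kotlin
import kotlin.math.abs

enum class ScreenState { UNFOLDED, FOLDED }

// An included angle of (nearly) 180 degrees between two screen regions means
// the first target screen is unfolded; any other angle means it is folded.
// The tolerance parameter is an added assumption to absorb sensor noise.
fun classifyScreenState(includedAngleDeg: Double, toleranceDeg: Double = 1.0): ScreenState =
    if (abs(includedAngleDeg - 180.0) <= toleranceDeg) ScreenState.UNFOLDED else ScreenState.FOLDED

fun main() {
    println(classifyScreenState(180.0))  // UNFOLDED
    println(classifyScreenState(95.0))   // FOLDED (partially folded range)
    println(classifyScreenState(355.0))  // FOLDED (folded outward)
}
```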
In the embodiment of the present invention, when the screen state of the first target screen is a folded state, the first target screen may include a first sub-screen and a second sub-screen, and it can be understood that the first sub-screen and the second sub-screen are located in different planes.
It should be noted that, in the embodiment of the present invention, if the electronic device determines that the screen state of the first target screen is the folded state, the electronic device may continue to perform the following step 206b; if the electronic device determines that the screen state of the first target screen is the unfolded state, the electronic device may continue to perform step 206c.
In the embodiment of the present invention, the screen state of the first target screen being a folded state may be a fully folded state or a partially folded state in which the first target screen is folded outward, which may be determined specifically according to an actual use requirement and is not limited in the embodiment of the present invention. The screen state of the first target screen being an unfolded state means that the screen regions of the first target screen are all located in the same plane, that is, the included angle between the screen regions is 180°.
Step 206b, the electronic device determines at least a partial area in the first sub-screen as a second screen area.
In this embodiment, the first screen region may be at least a partial region of the second sub-screen of the first target screen, and an included angle between the first sub-screen and the second sub-screen is greater than a first preset angle.
In this embodiment, the first screen area may be a partial area in the second sub-screen, or may be the entire area of the second sub-screen; the second screen area may be a partial area in the first sub-screen, or may be the entire area of the first sub-screen. This may be determined according to actual use requirements, and the embodiment of the present invention is not limited thereto.
The first preset angle may be any angle which is larger than 0 ° and can realize image acquisition, and may be determined according to actual use requirements, which is not limited in the embodiment of the present invention.
Optionally, in the embodiment of the present invention, areas of the first sub-screen and the second sub-screen may be the same (for example, the first target screen is folded symmetrically), or may be different (for example, the first target screen is folded asymmetrically), which may be determined specifically according to an actual use requirement, and the embodiment of the present invention is not limited.
Illustratively, as shown in fig. 9, fig. 9 is a schematic view of the first target screen in a fully folded state. Fig. 9 (a) is a front sectional view of the electronic device, fig. 9 (b) is a top view of the electronic device, and fig. 9 (c) is a bottom view of the electronic device. The first sub-screen of the first target screen may be 90 shown in (a) of fig. 9, the second sub-screen of the first target screen may be 91 shown in (a) of fig. 9, the second screen region may be region 92 in the first sub-screen 90 shown in (b) of fig. 9, and the first screen region may be region 93 in the second sub-screen 91 shown in (c) of fig. 9.
It is to be understood that the first sub-screen may be an area of the first target screen opposite to (i.e., facing) the user, and the second sub-screen may be an area of the first target screen opposite to (i.e., facing) the target file.
Step 206c, the electronic device determines a second target screen of the electronic device as a second screen area.
The first screen area may be an area in the first target screen. It is to be understood that the first screen area may be the entire area of the first target screen, or may be a partial area in the first target screen, that is, the first screen area may be at least a partial area in the first target screen. This may be determined according to actual use requirements, and the embodiment of the present invention is not limited thereto.
It should be noted that the second target screen may be a screen other than the first target screen in the screen of the electronic device.
Optionally, in this embodiment of the present invention, the second target screen may be a small screen located on a different surface from the first target screen in the screen of the electronic device. The small screen can be used for displaying content under the condition that the first target screen is turned off, so that the power consumption of the electronic equipment is saved, and the endurance time of the electronic equipment is prolonged.
Illustratively, as shown in fig. 10, fig. 10 is a schematic diagram of the first target screen in an unfolded state. Fig. 10 (a) is a top view of the electronic device, and fig. 10 (b) is a bottom view of the electronic device. The second target screen may be 94 shown in (a) of fig. 10, and the second screen region may be region 95 in the second target screen 94 shown in (a) of fig. 10; the first target screen may be 96 shown in (b) of fig. 10, and the first screen region may be region 97 in the first target screen 96 shown in (b) of fig. 10.
It is to be understood that the second target screen may be a screen opposite to (i.e., facing) a user in a screen of the electronic device, and the first target screen may be an area opposite to (i.e., facing) a target file in the screen of the electronic device.
For other descriptions in the step 206b and the step 206c, reference may be specifically made to the relevant description in the step 206, and details are not described here again.
In the embodiment of the present invention, when the first target screen of the electronic device is a single-sided screen, the electronic device may determine the second screen area according to the screen state (i.e., the folded state or the unfolded state) of the first target screen, and after the second screen area is determined, the first screen area is also determined accordingly, so that the electronic device may determine the appropriate first screen area and second screen area, that is, both the first screen area and the second screen area may be exposed outside, thereby facilitating the user to operate and view the contents of the first screen area and the second screen area.
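The following Kotlin sketch condenses steps 206a to 206c for a single-sided folding screen into a single decision; all names (ScreenRegion, RegionChoice, chooseRegionsSingleSided) are hypothetical and the mapping simply mirrors the folded and unfolded cases described above.

```kotlin
data class ScreenRegion(val name: String)

data class RegionChoice(val secondScreenArea: ScreenRegion, val firstScreenArea: ScreenRegion)

fun chooseRegionsSingleSided(
    folded: Boolean,
    firstSubScreen: ScreenRegion,     // faces the user when folded
    secondSubScreen: ScreenRegion,    // faces the target file when folded
    firstTargetScreen: ScreenRegion,  // the whole folding screen when unfolded
    secondTargetScreen: ScreenRegion  // the auxiliary screen on the other face
): RegionChoice =
    if (folded) {
        // Step 206b: control area on the first sub-screen, capture area in the second sub-screen.
        RegionChoice(secondScreenArea = firstSubScreen, firstScreenArea = secondSubScreen)
    } else {
        // Step 206c: control area on the second target screen, capture area in the first target screen.
        RegionChoice(secondScreenArea = secondTargetScreen, firstScreenArea = firstTargetScreen)
    }

fun main() {
    val choice = chooseRegionsSingleSided(
        folded = true,
        firstSubScreen = ScreenRegion("first sub-screen 90"),
        secondSubScreen = ScreenRegion("second sub-screen 91"),
        firstTargetScreen = ScreenRegion("first target screen 96"),
        secondTargetScreen = ScreenRegion("second target screen 94")
    )
    println("control on ${choice.secondScreenArea.name}, capture in ${choice.firstScreenArea.name}")
}
```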
Optionally, in another possible implementation manner, when the first target screen is a double-sided screen, as shown in fig. 11 in combination with fig. 7, the step 206 may be specifically implemented by the following steps 206d to 206f.
Step 206d, the electronic device determines the screen state of the first target screen.
Optionally, in an embodiment of the present invention, the first target screen may include a first screen and a second screen, and both the first screen and the second screen are foldable screens.
It can be understood that, in the embodiment of the present invention, the electronic device determining the screen state of the first target screen means the electronic device determining the screen state of the first screen and the screen state of the second screen.
It should be noted that, in the embodiment of the present invention, if the electronic device determines that the screen state of the first target screen is the folded state, the electronic device may continue to execute the following step 206e; if the electronic device determines that the screen state of the first target screen is the unfolded state, the electronic device may continue to perform step 206f.
When the first screen and the second screen are both folded, the first screen may include a third sub-screen and a fourth sub-screen, and the third sub-screen and the fourth sub-screen are located on different planes.
For the description that the first target screen is in the folded state and the first target screen is in the unfolded state, reference may be specifically made to the related description that the first target screen is in the folded state and the first target screen is in the unfolded state in step 206a, and details are not described here again.
Step 206e, the electronic device determines at least a partial area in the third sub-screen as the second screen area.
The first screen area may be at least a partial area of the fourth sub-screen. In the embodiment of the invention, the included angle between the third sub-screen and the fourth sub-screen is larger than a second preset angle.
The second preset angle may be any angle which is larger than 0 ° and can realize image acquisition, and may be determined according to actual use requirements, which is not limited in the embodiment of the present invention.
Exemplarily, as shown in fig. 12, fig. 12 is a schematic diagram of the first target screen in a folded state, where (a) in fig. 12 is a front sectional view of the electronic device, (b) in fig. 12 is a top view of the electronic device, and (c) in fig. 12 is a bottom view of the electronic device. The first screen may be 120 shown in (a) of fig. 12, the second screen may be 121 shown in (b) of fig. 12, the third sub-screen of the first screen may be 122 shown in (a) of fig. 12, and the fourth sub-screen of the first screen may be 123 shown in (a) of fig. 12; the second screen area may be area 124 in the third sub-screen 122 shown in (b) of fig. 12, that is, the electronic device takes at least a partial area in the third sub-screen as the second screen area, and the first screen area may be area 125 in the fourth sub-screen 123 shown in (c) of fig. 12, that is, the first screen area is at least a partial area in the fourth sub-screen.
In step 206f, the electronic device determines at least a partial area in the first screen as a second screen area.
The first screen area may be at least a partial area of the second screen. That is, the first screen area may be a partial area in the second screen, or may be the entire area of the second screen. This may be determined according to actual use requirements, and the embodiment of the present invention is not limited thereto.
Exemplarily, as shown in fig. 13, fig. 13 is a schematic diagram of the first target screen in an unfolded state, where (a) in fig. 13 is a cross-sectional view of the electronic device, (b) in fig. 13 is a top view of the electronic device, and (c) in fig. 13 is a bottom view of the electronic device. The first screen may be 130 shown in (a) of fig. 13, the second screen may be 131 shown in (a) of fig. 13, the second screen region may be region 132 in the first screen 130 shown in (b) of fig. 13, and the first screen region may be region 133 in the second screen 131 shown in (c) of fig. 13.
It is to be understood that the first screen may be a region facing the user in the first target screen, and the second screen may be a region facing the target file (or the first object) in the first target screen.
For other descriptions in the step 206e and the step 206f, reference may be specifically made to the relevant description in the step 206, and details are not described here again.
In the embodiment of the present invention, when the first target screen of the electronic device is a dual-sided screen, because the electronic device may determine the second screen region according to the screen state (i.e., the folded state or the unfolded state) of the first target screen, and after the second screen region is determined, the first screen region is also determined accordingly, the electronic device may determine the appropriate first screen region and second screen region, that is, the first screen region and the second screen region may be exposed in the environment, so that a user may operate and view the contents displayed by the first screen region and the second screen region, and the user may trigger the electronic device to capture an image through at least one camera in the first screen region.
Optionally, in an embodiment of the present invention, in another possible implementation manner (that is, the first target screen is a double-sided screen), the first screen may include N first sub-areas, the second screen may include N second sub-areas, and one first sub-area may correspond to one second sub-area, and each first sub-area displays the acquisition control.
When the second screen region is a region in the first screen and the first screen region is a region in the second screen, an input of the user on the acquisition control in one first sub-region may be used to update the coverage of the first screen region in the second sub-region corresponding to the one first sub-region, where N is an integer greater than 0.
For the description about updating the coverage of the first screen region in the second sub-region corresponding to one first sub-region, reference may be specifically made to the description about updating the coverage of the first screen region in the foregoing embodiment, and details are not described here again.
In the embodiment of the invention, when the first target screen is a double-sided screen, the electronic device can update, according to the input of the user in any first sub-area in the first screen, the coverage of the first screen area in the second sub-area corresponding to that first sub-area, so that the user can flexibly adjust, through an input in a first sub-area, the coverage of the first screen area in the corresponding second sub-area, and the electronic device can more accurately acquire the image of the target file.
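A minimal sketch of this one-to-one mapping between first sub-areas and second sub-areas is given below; the names Coverage, SubAreaCoverageMap and onSelectionInput are assumptions used only to illustrate that an input on the i-th first sub-area updates only the coverage kept for the i-th second sub-area.

```kotlin
data class Coverage(val left: Float, val top: Float, val right: Float, val bottom: Float)

class SubAreaCoverageMap(n: Int) {
    // coverages[i] stores the coverage of the first screen area inside the i-th second sub-area.
    private val coverages = MutableList(n) { Coverage(0f, 0f, 0f, 0f) }

    // Called when the user operates the acquisition control shown in first sub-area i.
    fun onSelectionInput(firstSubAreaIndex: Int, newCoverage: Coverage) {
        coverages[firstSubAreaIndex] = newCoverage
    }

    fun coverageOf(secondSubAreaIndex: Int): Coverage = coverages[secondSubAreaIndex]
}

fun main() {
    val map = SubAreaCoverageMap(n = 2)
    map.onSelectionInput(0, Coverage(10f, 10f, 200f, 120f))
    println(map.coverageOf(0))  // only the second sub-area paired with first sub-area 0 changes
    println(map.coverageOf(1))  // still the default coverage
}
```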
Optionally, in the embodiment of the present invention, before the electronic device acquires the image of the first object through the at least one camera, the electronic device may determine the first screen area according to the first input; and then, acquiring an image of an object (such as a first object) within an acquisition range of at least one camera by at least one camera in the first screen area.
Optionally, in this embodiment of the present invention, when the first input is a touch input, the electronic device may determine the first screen area based on at least one touch position of the first input. Specifically, in this case, the first screen area is an area corresponding to a closed area formed by the at least one touch position in the screen of the electronic device.
Optionally, in this embodiment of the present invention, when the first input is a sliding input, the electronic device may determine the first screen area based on a sliding track of the first input. Specifically, in this case, the first screen area is an area corresponding to a closed area formed by the sliding track in the screen of the electronic device.
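One simple way to derive a screen area from the touch positions, or from the sampled points of a sliding track, is sketched below; approximating the closed area by its axis-aligned bounding box is an assumption of this example, and the names TouchPoint, ScreenArea and areaFromPoints are hypothetical.

```kotlin
data class TouchPoint(val x: Float, val y: Float)
data class ScreenArea(val left: Float, val top: Float, val right: Float, val bottom: Float)

// Approximate the closed area formed by the touch positions (or by the sampled
// points of a sliding track) with their axis-aligned bounding box.
fun areaFromPoints(points: List<TouchPoint>): ScreenArea {
    require(points.isNotEmpty()) { "at least one touch position is required" }
    return ScreenArea(
        left = points.minOf { it.x },
        top = points.minOf { it.y },
        right = points.maxOf { it.x },
        bottom = points.maxOf { it.y }
    )
}

fun main() {
    val slideTrack = listOf(TouchPoint(12f, 40f), TouchPoint(180f, 44f), TouchPoint(90f, 200f))
    println(areaFromPoints(slideTrack))  // ScreenArea(left=12.0, top=40.0, right=180.0, bottom=200.0)
}
```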
Optionally, in this embodiment of the present invention, when the first input is associated with the preset pattern, the electronic device may determine, based on an input parameter of the first input, a target area of the preset pattern, and determine the target area of the preset pattern as the first screen area.
Here, the first input being associated with a preset pattern may be understood as meaning that the input parameters of the first input are associated with the preset pattern. The input parameters of the first input may be at least one of the following: the touch position and the input duration.
For example, the touch position of the first input may be associated with a position of a preset pattern, and the input duration of the first input may be associated with a size of the preset pattern.
Optionally, in the embodiment of the present invention, the electronic device may determine the target area of the preset pattern as follows: the electronic device determines a center position of the preset pattern on the screen of the electronic device according to the touch position of the first input; and, with the determined center position as the center, adjusts the size of the preset pattern (i.e., enlarges the size of the preset pattern or reduces the size of the preset pattern) at a second preset ratio according to the input duration of the first input. The second preset ratio may be any value greater than 0.
Optionally, in the embodiment of the present invention, the preset pattern may be a pattern with any shape, for example, the preset pattern may be a circular pattern, a triangular pattern, an elliptical pattern, or the like, which may be determined according to actual use requirements, and the embodiment of the present invention is not limited.
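Assuming a circular preset pattern, the determination of the target area described above can be sketched as follows; the growth rule (radius = base radius + second preset ratio × seconds pressed), the default values and all names are illustrative assumptions rather than the patent's required implementation.

```kotlin
data class CircleArea(val centerX: Float, val centerY: Float, val radius: Float)

// The touch position fixes the centre of the preset pattern; the input duration
// scales its size at the second preset ratio (any value greater than 0).
fun presetPatternArea(
    touchX: Float,
    touchY: Float,
    inputDurationMs: Long,
    baseRadius: Float = 50f,          // assumed starting size of the preset pattern
    secondPresetRatio: Float = 20f    // assumed growth in pixels per second of press
): CircleArea {
    val radius = baseRadius + secondPresetRatio * (inputDurationMs / 1000f)
    return CircleArea(touchX, touchY, radius)
}

fun main() {
    // Pressing for 2.5 s at (300, 480) yields a circle of radius 100 px.
    println(presetPatternArea(300f, 480f, inputDurationMs = 2500))
}
```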
It is understood that, after determining the first screen area, the electronic device may directly capture an image through the at least one camera in the first screen area without the user triggering the electronic device to capture the image again. Therefore, the speed of image acquisition through the at least one camera can be increased, and the man-machine interaction performance can be improved.
In the embodiment of the present invention, the image acquisition methods shown in the above method drawings are each described by way of example with reference to one of the drawings in the embodiments of the present invention. In specific implementation, the image acquisition methods shown in the above method drawings may also be implemented in combination with any other drawings that may be combined, as illustrated in the above embodiments, and details are not described here again.
As shown in fig. 14, an embodiment of the present invention provides an electronic device 140, and the electronic device 140 may include a receiving module 141 and an acquisition module 142. A receiving module 141, which may be used to receive a first input of a user; an acquiring module 142, which may be configured to acquire, in response to the first input received by the receiving module 141, an image of a first object through at least one camera, where the at least one camera is located in a first screen area of the electronic device, the first object may be an object in the target file that is located within a coverage of the first screen area, and the first object may be an object located within an acquisition range of the at least one camera; and the distance between the target file and the first screen area is smaller than or equal to a preset threshold value.
Optionally, in the embodiment of the present invention, in combination with fig. 14, as shown in fig. 15, the electronic device may further include a display module 143. The receiving module 141 may be further configured to receive a second input from the user before receiving the first input from the user; a display module 143, which may be configured to display an acquisition control in a second screen area of the electronic device in response to the second input received by the receiving module 141; and the first input is input of the user to the acquisition control.
Optionally, in this embodiment of the present invention, the acquisition control may include at least one of a trigger control and a selection control. The trigger control may be used to trigger acquisition of an image, and the selection control may be used to perform any of: determining a coverage range, determining the coverage range and triggering the acquisition of the image.
Optionally, in this embodiment of the present invention, the acquisition control includes a selection control and a trigger control, and the second input may include a first sub-input and a second sub-input. The display module 143 may be specifically configured to display the trigger control in the second screen area in response to the first sub-input; and displaying the selection control in a second screen area in response to the second sub-input.
Optionally, in an embodiment of the present invention, the acquisition control includes a selection control, or includes a selection control and a trigger control, the first input may include a third sub-input and a fourth sub-input, the third sub-input may be an input to the selection control, and the fourth sub-input may be an input to the selection control or the trigger control. The acquisition module 142 may be specifically configured to update the coverage of the first screen area in response to the third sub-input; and in response to the fourth sub-input, acquiring an image of the first object through at least one camera in the first screen area after the coverage is updated.
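A hedged sketch of this two-stage handling of the first input is given below; CoverageRange and AcquisitionFlow are hypothetical names, and the callback passed to onFourthSubInput stands in for whatever capture routine the acquisition module 142 actually uses.

```kotlin
data class CoverageRange(val left: Float, val top: Float, val right: Float, val bottom: Float)

class AcquisitionFlow(private var coverage: CoverageRange) {
    // Third sub-input: operate the selection control to update the coverage of the first screen area.
    fun onThirdSubInput(newCoverage: CoverageRange) {
        coverage = newCoverage
    }

    // Fourth sub-input: trigger the camera(s) in the first screen area to capture within the updated coverage.
    fun onFourthSubInput(capture: (CoverageRange) -> Unit) {
        capture(coverage)
    }
}

fun main() {
    val flow = AcquisitionFlow(CoverageRange(0f, 0f, 100f, 100f))
    flow.onThirdSubInput(CoverageRange(20f, 20f, 220f, 160f))
    flow.onFourthSubInput { println("acquire image within $it") }
}
```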
Optionally, in an embodiment of the present invention, with reference to fig. 14, as shown in fig. 16, the electronic device may include a first target screen, and the first target screen may be a foldable screen. The electronic device may also include a determination module 144. The determining module 144 may be configured to determine the second screen region according to a screen state of the first target screen before the receiving module 141 receives the first input of the user, where the screen state of the first target screen is a folded state or an unfolded state.
Optionally, in the embodiment of the present invention, the first target screen may be a single-sided screen; and in the case that the screen state of the first target screen is a folded state, the first target screen may include a first sub-screen and a second sub-screen. The determining module 144 may be specifically configured to determine, when the screen state of the first target screen is a folded state, at least a partial region in the first sub-screen as a second screen region, where the first screen region is at least a partial region in the second sub-screen, and an included angle between the first sub-screen and the second sub-screen is greater than a first preset angle. Or, the determining module 144 may be specifically configured to determine, when the screen state of the first target screen is the expanded state, a second target screen of the electronic device as a second screen area, where the first screen area is an area in the first target screen.
Optionally, in the embodiment of the present invention, the first target screen may be a double-sided screen, and the first target screen may include a first screen and a second screen, where the first screen and the second screen are both folding screens; and in the case where the first screen and the second screen are both folded states, the first screen may include a third sub-screen and a fourth sub-screen. The determining module 144 may be specifically configured to determine at least a partial region of the third sub-screen as the second screen region when the first screen and the second screen are both folded, where the first screen region may be at least a partial region of the fourth sub-screen, and an included angle between the third sub-screen and the fourth sub-screen is greater than a second preset angle. Or, the determining module 144 may be specifically configured to determine, when the first screen and the second screen are both in the expanded state, at least a partial region in the first screen as the second screen region, where the first screen region may be at least a partial region in the second screen.
Optionally, in this embodiment of the present invention, the first screen may include N first sub-areas, the second screen may include N second sub-areas, one first sub-area may correspond to one second sub-area, and each first sub-area displays the acquisition control. When the second screen area is an area in the first screen and the first screen area is an area in the second screen, the input of the user on the acquisition control in one first sub-area may be used to update the coverage of the first screen area in the second sub-area corresponding to the first sub-area, where N is an integer greater than 0.
Optionally, in this embodiment of the present invention, in combination with fig. 14, as shown in fig. 17, the electronic device may further include a determining module 144. The determining module 144 may be configured to: determine, in the case that the first input is a touch input, the first screen area based on at least one touch position of the first input; determine, in the case that the first input is a sliding input, the first screen area based on a sliding track of the first input; and, in the case that the first input is associated with a preset pattern, determine a target area of the preset pattern based on an input parameter of the first input and determine the target area of the preset pattern as the first screen area.
Optionally, in this embodiment of the present invention, in combination with fig. 14, as shown in fig. 18, the electronic device may further include a control module 145. The control module 145 may be configured to control the first screen area to emit preset cold color light before the acquisition module 142 acquires the image of the first object through the at least one camera in the first screen area.
The electronic device 140 provided in the embodiment of the present invention can implement each process implemented by the electronic device in the above method embodiments, and is not described here again to avoid repetition.
The embodiment of the invention provides electronic equipment, which can receive first input of a user; and in response to the first input, acquiring an image of a first object (an object in the target file within the coverage of the first screen area and an object within the acquisition range of the at least one camera) through the at least one camera (located in the first screen area of the electronic equipment); and the distance between the target file and the first screen area is smaller than or equal to a preset threshold value. According to the scheme, when the electronic equipment acquires the image of the first object through the at least one camera positioned in the first screen area of the electronic equipment, on one hand, the first screen area can provide light for the at least one camera, so that the at least one camera can be ensured to successfully acquire the image; on the other hand, because the distance between the target file and the first screen area is smaller than or equal to the preset threshold value, that is, the first screen area and the target file are approximately attached to each other, the influence of the ambient light on the acquired image can be reduced, and thus the distortion of the image acquired by the electronic equipment is small. Therefore, the distortion degree of the image collected by the camera can be reduced on the basis of ensuring that the camera successfully collects the image.
Fig. 19 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present invention. As shown in fig. 19, the electronic device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 19 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The user input unit 107 may be configured to receive a first input from a user; the processor 110 may be configured to capture, in response to a first input received by the user input unit, an image of a first object through at least one camera, where the at least one camera is located in a first screen area of the electronic device, the first object is an object in the target file that is located within a coverage of the first screen area, and the first object is an object located within a capture range of the at least one camera; and the distance between the target file and the first screen area is smaller than or equal to a preset threshold value.
It can be understood that, in the embodiment of the present invention, the receiving module 141 in the structural schematic diagrams of the electronic device (for example, fig. 14 to 18) may be implemented by the user input unit 107. The acquisition module 142 in the structural schematic diagrams of the electronic device (e.g., fig. 14-18) may be implemented by the processor 110. The display module 143 in the structural schematic diagram of the electronic device (for example, fig. 15) can be implemented by the display unit 106. The determination module 144 in the structural schematic of the electronic device (e.g., fig. 16 and 17) can be implemented by the processor 110. The control module 145 in the structural schematic diagram of the electronic device (for example, fig. 18) may be implemented by the processor 110.
The embodiment of the invention provides electronic equipment, which can receive first input of a user; and in response to the first input, acquiring an image of a first object (an object in the target file within the coverage of the first screen area and an object within the acquisition range of the at least one camera) through the at least one camera (located in the first screen area of the electronic equipment); and the distance between the target file and the first screen area is smaller than or equal to a preset threshold value. According to the scheme, when the electronic equipment acquires the image of the first object through the at least one camera positioned in the first screen area of the electronic equipment, on one hand, the first screen area can provide light for the at least one camera, so that the at least one camera can be ensured to successfully acquire the image; on the other hand, because the distance between the target file and the first screen area is smaller than or equal to the preset threshold value, that is, the first screen area and the target file are approximately attached to each other, the influence of the ambient light on the acquired image can be reduced, and thus the distortion of the image acquired by the electronic equipment is small. Therefore, the distortion degree of the image collected by the camera can be reduced on the basis of ensuring that the camera successfully collects the image.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during a message transmission or call process, and specifically, after receiving downlink data from a base station, the downlink data is processed by the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 102, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the electronic apparatus 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the graphics processor 1041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphic processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and may be capable of processing such sound into audio data. In the case of a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 101 and then output.
The electronic device 100 also includes at least one sensor 105, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the electronic device 100 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. Touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 1071 (e.g., operations by a user on or near touch panel 1071 using a finger, stylus, or any suitable object or attachment). The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Specifically, other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 19, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the electronic device, and is not limited herein.
The interface unit 108 is an interface for connecting an external device to the electronic apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 100 or may be used to transmit data between the electronic apparatus 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the electronic device. Processor 110 may include one or more processing units; alternatively, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The electronic device 100 may further include a power supply 111 (e.g., a battery) for supplying power to each component, and optionally, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the electronic device 100 includes some functional modules that are not shown, and are not described in detail herein.
Optionally, an embodiment of the present invention further provides an electronic device, which includes a processor 110, a memory 109, and a computer program that is stored in the memory 109 and is executable on the processor 110, and when the computer program is executed by the processor 110, the electronic device implements the processes of the foregoing method embodiment, and can achieve the same technical effects, and details are not repeated here to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the processes of the method embodiments, and can achieve the same technical effects, and in order to avoid repetition, the details are not repeated here. The computer-readable storage medium may include a read-only memory (ROM), a Random Access Memory (RAM), a magnetic or optical disk, and the like.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling an electronic device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (20)

1. An image acquisition method applied to electronic equipment is characterized by comprising the following steps:
receiving a first input of a user;
in response to the first input, acquiring an image of a first object through at least one camera, wherein the at least one camera is located in a first screen area of the electronic equipment, the first object is an object located in a coverage area of the first screen area in a target file, and the first object is an object located in an acquisition range of the at least one camera;
the at least one camera is an under-screen camera, and the distance between the target file and the first screen area is smaller than or equal to a preset threshold value, so that the target file is approximately attached to or completely attached to the first screen area.
2. The method of claim 1, wherein prior to receiving the first input from the user, the method further comprises:
receiving a second input of the user;
displaying an acquisition control in a second screen area of the electronic device in response to the second input;
and the first input is input of the acquisition control by a user.
3. The method of claim 2, wherein the acquisition control comprises at least one of a trigger control and a selection control;
the trigger control is used for triggering the acquisition of images, and the selection control is used for executing any one of the following items: determining a coverage range, determining the coverage range and triggering the acquisition of the image.
4. The method of claim 3, wherein the acquisition control comprises a selection control and a trigger control, and wherein the second input comprises a first sub-input and a second sub-input;
the displaying, in response to the second input, an acquisition control in a second screen area of the electronic device, comprising:
displaying the trigger control in the second screen area in response to the first sub-input;
displaying the selection control in the second screen region in response to the second sub-input.
5. The method of claim 3, wherein the acquisition control comprises a selection control or comprises a selection control and a trigger control, wherein the first input comprises a third sub-input and a fourth sub-input, wherein the third sub-input is an input to the selection control, and the fourth sub-input is an input to the selection control or the trigger control;
the acquiring, by at least one camera in response to the first input, an image of a first object, comprising:
updating a coverage of the first screen region in response to the third sub-input;
and responding to the fourth sub-input, and acquiring an image of the first object through at least one camera in the first screen area after the coverage range is updated.
6. The method of any of claims 1-5, wherein the electronic device comprises a first target screen, the first target screen being a foldable screen;
before the receiving the first input of the user, the method further comprises:
and determining a second screen area according to the screen state of the first target screen, wherein the screen state of the first target screen is a folded state or an unfolded state.
7. The method of claim 6, wherein the first target screen is a single-sided screen; under the condition that the screen state of the first target screen is a folded state, the first target screen comprises a first sub-screen and a second sub-screen;
the determining a second screen area according to the screen state of the first target screen includes:
under the condition that the screen state of the first target screen is a folded state, determining at least partial area in the first sub-screen as the second screen area, wherein the first screen area is at least partial area in the second sub-screen, and an included angle between the first sub-screen and the second sub-screen is larger than a first preset angle;
or,
and under the condition that the screen state of the first target screen is an expanded state, determining a second target screen of the electronic equipment as the second screen area, wherein the first screen area is an area in the first target screen.
8. The method of claim 6, wherein the first target screen is a dual-sided screen and the first target screen comprises a first screen and a second screen, both the first screen and the second screen being foldable screens;
under the condition that the first screen and the second screen are both in a folded state, the first screen comprises a third sub-screen and a fourth sub-screen;
the determining a second screen area according to the screen state of the first target screen includes:
under the condition that the first screen and the second screen are both folded, determining at least partial area in the third sub-screen as the second screen area, wherein the first screen area is at least partial area in the fourth sub-screen, and an included angle between the third sub-screen and the fourth sub-screen is larger than a second preset angle;
or,
determining at least a partial area in the first screen as the second screen area when the first screen and the second screen are both in an unfolded state, wherein the first screen area is at least a partial area in the second screen.
9. The method of claim 8, wherein the first screen comprises N first sub-regions, wherein the second screen comprises N second sub-regions, wherein one first sub-region corresponds to one second sub-region, and wherein each first sub-region displays an acquisition control;
and when the second screen area is an area in the first screen and the first screen area is an area in the second screen, inputting the acquisition control in one first sub-area to update the coverage of the first screen area in the second sub-area corresponding to the one first sub-area, wherein N is an integer greater than 0.
10. The method of claim 1, wherein prior to acquiring the image of the first object by the at least one camera, the method further comprises:
determining the first screen area based on at least one touch position of the first input in the case that the first input is a touch input;
determining the first screen area based on a sliding track of the first input in the case that the first input is a sliding input;
in a case where the first input is associated with a preset pattern, determining a target area of the preset pattern based on an input parameter of the first input, and determining the target area of the preset pattern as the first screen area.
11. The method of claim 1, wherein prior to acquiring the image of the first object by the at least one camera, the method further comprises:
and controlling the first screen area to emit preset cold color light.
12. An electronic device is characterized by comprising a receiving module and an acquisition module;
the receiving module is used for receiving a first input of a user;
the acquisition module is used for responding to the first input received by the receiving module, and acquiring an image of a first object through at least one camera, wherein the at least one camera is positioned in a first screen area of the electronic equipment, the first object is an object positioned in a coverage range of the first screen area in a target file, and the first object is an object positioned in an acquisition range of the at least one camera;
the at least one camera is an under-screen camera, and the distance between the target file and the first screen area is smaller than or equal to a preset threshold value, so that the target file is approximately attached to or completely attached to the first screen area.
13. The electronic device of claim 12, further comprising a display module;
the receiving module is further used for receiving a second input of the user before receiving the first input of the user;
the display module is used for responding to the second input received by the receiving module and displaying a collection control in a second screen area of the electronic equipment;
and the first input is input of the acquisition control by a user.
14. The electronic device of claim 13, wherein the acquisition control comprises at least one of a trigger control and a selection control;
the trigger control is used for triggering the acquisition of images, and the selection control is used for executing any one of the following items: determining a coverage range, determining the coverage range and triggering the acquisition of the image.
15. The electronic device of claim 14, wherein the acquisition control comprises a selection control and a trigger control, and wherein the second input comprises a first sub-input and a second sub-input;
the display module is specifically configured to display the trigger control in the second screen area in response to the first sub-input; and displaying the selection control in the second screen region in response to the second sub-input.
16. The electronic device of claim 14, wherein the acquisition control comprises a selection control or comprises a selection control and a trigger control, wherein the first input comprises a third sub-input and a fourth sub-input, wherein the third sub-input is an input to the selection control, and wherein the fourth sub-input is an input to the selection control or the trigger control;
the acquisition module is specifically configured to update the coverage area of the first screen area in response to the third sub-input; and in response to the fourth sub-input, acquiring an image of the first object through at least one camera in the first screen area after the coverage is updated.
17. The electronic device of any of claims 12-16, wherein the electronic device comprises a first target screen, the first target screen being a foldable screen; the electronic device further comprises a determination module;
the determining module is configured to determine the second screen region according to the screen state of the first target screen before the receiving module receives the first input of the user, where the screen state of the first target screen is a folded state or an unfolded state.
18. The electronic device of claim 17, wherein the first target screen is a single-sided screen; under the condition that the screen state of the first target screen is a folded state, the first target screen comprises a first sub-screen and a second sub-screen;
the determining module is specifically configured to determine at least a partial region in the first sub-screen as the second screen region when the screen state of the first target screen is a folded state, where the first screen region is at least a partial region in the second sub-screen, and an included angle between the first sub-screen and the second sub-screen is greater than a first preset angle;
or,
the determining module is specifically configured to determine a second target screen of the electronic device as the second screen area when the screen state of the first target screen is the expanded state, where the first screen area is an area in the first target screen.
19. The electronic device of claim 17, wherein the first target screen is a dual-sided screen comprising a first screen and a second screen, both of which are foldable screens; when the first screen and the second screen are both in the folded state, the first screen comprises a third sub-screen and a fourth sub-screen;
the determining module is specifically configured to determine at least a partial region of the third sub-screen as the second screen area when the first screen and the second screen are both in the folded state, where the first screen area is at least a partial region of the fourth sub-screen, and an included angle between the third sub-screen and the fourth sub-screen is greater than a second preset angle;
or,
the determining module is specifically configured to determine at least a partial region of the first screen as the second screen area when the first screen and the second screen are both in the unfolded state, where the first screen area is at least a partial region of the second screen.
20. The electronic device of claim 12, further comprising a determining module;
the determining module is configured to determine the first screen area based on at least one touch position of the first input when the first input is a touch input;
determine the first screen area based on a sliding trajectory of the first input when the first input is a sliding input; or,
when the first input is associated with a preset pattern, determine a target area of the preset pattern based on an input parameter of the first input, and determine the target area of the preset pattern as the first screen area.
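To make the module interplay in claims 13-16 easier to follow, here is a minimal Python sketch of the described control flow: a second input makes the device show the acquisition control in the second screen area, a third sub-input on the selection control updates the coverage of the first screen area, and a fourth sub-input triggers image capture through the camera(s) in that area. All names (ScreenArea, AcquisitionController, the on_* handlers) are hypothetical illustrations rather than terms from the patent, the camera capture is stubbed out, and the distance check of claim 12 is omitted.

```python
# Minimal sketch (hypothetical names) of the control flow in claims 13-16.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class ScreenArea:
    """A rectangular display region, in pixels."""
    x: int
    y: int
    width: int
    height: int


@dataclass
class AcquisitionController:
    first_area: ScreenArea            # region containing the off-screen camera(s)
    second_area: ScreenArea           # region where the acquisition control is shown
    controls: List[str] = field(default_factory=list)

    def on_second_input(self) -> None:
        # Claim 13: the second input makes the device display the
        # acquisition control (trigger + selection) in the second screen area.
        self.controls = ["trigger", "selection"]

    def on_third_sub_input(self, new_coverage: ScreenArea) -> None:
        # Claim 16: an input to the selection control updates the
        # coverage area of the first screen area.
        self.first_area = new_coverage

    def on_fourth_sub_input(self) -> Tuple[int, int, int, int]:
        # Claim 16: a further input triggers image acquisition through the
        # camera(s) in the updated first screen area (capture is stubbed out).
        a = self.first_area
        return (a.x, a.y, a.width, a.height)


if __name__ == "__main__":
    ctl = AcquisitionController(ScreenArea(0, 0, 400, 300), ScreenArea(0, 900, 400, 200))
    ctl.on_second_input()                                   # second input -> controls appear
    ctl.on_third_sub_input(ScreenArea(50, 50, 200, 150))    # third sub-input -> new coverage
    print(ctl.controls, ctl.on_fourth_sub_input())          # fourth sub-input -> capture
```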
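Claims 17-19 reduce to a small decision table that maps the fold state of the screen(s) to the pair of screen regions. The sketch below captures that mapping under stated assumptions: the angle thresholds, region labels, and function names (pick_regions_single_sided, pick_regions_dual_sided) are invented for illustration; the claims only fix which screen or sub-screen each region lies in.

```python
# Minimal sketch (illustrative names and angles) of the region selection in claims 17-19.
from enum import Enum


class ScreenState(Enum):
    FOLDED = "folded"
    UNFOLDED = "unfolded"


def pick_regions_single_sided(state: ScreenState, fold_angle_deg: float,
                              first_preset_angle_deg: float = 90.0) -> dict:
    """Claim 18: the first target screen is a single-sided foldable screen."""
    if state is ScreenState.FOLDED and fold_angle_deg > first_preset_angle_deg:
        # Folded: control area on the first sub-screen, cameras under the second sub-screen.
        return {"second_screen_area": "first_sub_screen",
                "first_screen_area": "second_sub_screen"}
    # Unfolded: control area on a separate second target screen,
    # first screen area somewhere inside the first target screen.
    return {"second_screen_area": "second_target_screen",
            "first_screen_area": "first_target_screen"}


def pick_regions_dual_sided(both_folded: bool, fold_angle_deg: float,
                            second_preset_angle_deg: float = 90.0) -> dict:
    """Claim 19: a dual-sided screen made of two foldable screens."""
    if both_folded and fold_angle_deg > second_preset_angle_deg:
        return {"second_screen_area": "third_sub_screen",
                "first_screen_area": "fourth_sub_screen"}
    return {"second_screen_area": "first_screen",
            "first_screen_area": "second_screen"}


if __name__ == "__main__":
    print(pick_regions_single_sided(ScreenState.FOLDED, 120.0))
    print(pick_regions_dual_sided(both_folded=False, fold_angle_deg=0.0))
```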
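Claim 20 distinguishes three ways of deriving the first screen area from the first input. The following sketch shows one plausible geometry for each branch; the padding, bounding-box, and scaling choices are assumptions of this illustration, since the claim only requires that the area follow from the touch position(s), the sliding trajectory, or the preset pattern's parameters.

```python
# Minimal sketch (invented geometry) of the three branches of claim 20.
from typing import List, Tuple

Point = Tuple[int, int]
Rect = Tuple[int, int, int, int]   # x, y, width, height


def area_from_touch(points: List[Point], pad: int = 40) -> Rect:
    # Touch input: build an area around the touch position(s).
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs) - pad, min(ys) - pad,
            (max(xs) - min(xs)) + 2 * pad, (max(ys) - min(ys)) + 2 * pad)


def area_from_slide(track: List[Point]) -> Rect:
    # Sliding input: use the bounding box of the sliding trajectory.
    xs = [p[0] for p in track]
    ys = [p[1] for p in track]
    return (min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))


def area_from_pattern(anchor: Point, scale: float,
                      base_pattern: Rect = (0, 0, 200, 150)) -> Rect:
    # Preset pattern: place and scale a predefined shape from the input
    # parameters, then use its target area as the first screen area.
    _, _, w, h = base_pattern
    return (anchor[0], anchor[1], int(w * scale), int(h * scale))


if __name__ == "__main__":
    print(area_from_touch([(100, 200)]))                       # touch branch
    print(area_from_slide([(10, 10), (110, 60), (210, 160)]))  # slide branch
    print(area_from_pattern((50, 80), 1.5))                    # pattern branch
```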
CN201910803984.XA 2019-08-28 2019-08-28 Image acquisition method and electronic equipment Active CN110602358B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910803984.XA CN110602358B (en) 2019-08-28 2019-08-28 Image acquisition method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910803984.XA CN110602358B (en) 2019-08-28 2019-08-28 Image acquisition method and electronic equipment

Publications (2)

Publication Number Publication Date
CN110602358A CN110602358A (en) 2019-12-20
CN110602358B true CN110602358B (en) 2021-06-04

Family

ID=68856088

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910803984.XA Active CN110602358B (en) 2019-08-28 2019-08-28 Image acquisition method and electronic equipment

Country Status (1)

Country Link
CN (1) CN110602358B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114095643B (en) * 2020-08-03 2022-11-11 珠海格力电器股份有限公司 Multi-subject fusion imaging method and device, storage medium and electronic equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7903143B2 (en) * 2008-03-13 2011-03-08 Dell Products L.P. Systems and methods for document scanning using a variable intensity display of an information handling system
US8905314B2 (en) * 2010-09-30 2014-12-09 Apple Inc. Barcode recognition using data-driven classifier
KR102059359B1 (en) * 2012-11-13 2019-12-26 삼성전자주식회사 Method of operating and manufacturing display device, and display device
JP6044426B2 (en) * 2013-04-02 2016-12-14 富士通株式会社 Information operation display system, display program, and display method
CN105550561B * 2015-12-14 2019-03-15 Oppo广东移动通信有限公司 Recognition method and device for a mobile terminal
CN107748615B (en) * 2017-11-07 2020-05-19 Oppo广东移动通信有限公司 Screen control method and device, storage medium and electronic equipment

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1862445A (en) * 2006-06-16 2006-11-15 北京中星微电子有限公司 Notebook computer with calling card and/or file scanning function
CN101261682A (en) * 2007-03-05 2008-09-10 株式会社理光 Image processing apparatus, image processing method, and computer program product
CN102201051A (en) * 2010-03-25 2011-09-28 汉王科技股份有限公司 Text excerpting device, method and system
CN104471563A (en) * 2012-06-01 2015-03-25 郑宝堧 Method for digitizing paper documents by using transparent display or device having air gesture function and beam screen function and system therefor
JP2015212892A (en) * 2014-05-02 2015-11-26 キヤノン株式会社 Image processor, information processing method and program
CN205068431U (en) * 2015-10-30 2016-03-02 深圳中物光学精密机械有限公司 Automatic focusing device
CN206696859U (en) * 2017-05-11 2017-12-01 刘丽霞 Wrist-worn medical two-dimensional code scanning terminal
CN109302569A (en) * 2018-09-27 2019-02-01 维沃移动通信有限公司 Image imaging method and device for a mobile terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design and Implementation of a Smart Power Supply Service Teller Machine System; Zhou Wenqiong et al.; 《软件导刊》; 2019-05-27; full text *

Also Published As

Publication number Publication date
CN110602358A (en) 2019-12-20

Similar Documents

Publication Publication Date Title
CN108471498B (en) Shooting preview method and terminal
CN110505400B (en) Preview image display adjustment method and terminal
CN109857306B (en) Screen capturing method and terminal equipment
CN111562896B (en) Screen projection method and electronic equipment
CN109032445B (en) Screen display control method and terminal equipment
WO2021012927A1 (en) Icon display method and terminal device
CN108762634B (en) Control method and terminal
CN111142991A (en) Application function page display method and electronic equipment
CN109857289B (en) Display control method and terminal equipment
CN111142723B (en) Icon moving method and electronic equipment
CN110489045B (en) Object display method and terminal equipment
CN111026316A (en) Image display method and electronic equipment
CN110968229A (en) Wallpaper setting method and electronic equipment
CN111405117B (en) Control method and electronic equipment
CN111143013A (en) Screenshot method and electronic equipment
CN110795021B (en) Information display method and device and electronic equipment
WO2021031868A1 (en) Interface display method and terminal
CN111385415B (en) Shooting method and electronic equipment
CN110830713A (en) Zooming method and electronic equipment
CN111010523A (en) Video recording method and electronic equipment
CN108804628B (en) Picture display method and terminal
CN110798621A (en) Image processing method and electronic equipment
CN110868546B (en) Shooting method and electronic equipment
CN111596990A (en) Picture display method and device
CN111190517B (en) Split screen display method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant