CN117425071B - Image acquisition method, electronic equipment and storage medium - Google Patents

Image acquisition method, electronic equipment and storage medium

Info

Publication number
CN117425071B
Authority
CN
China
Prior art keywords
camera
image
imaging target
acquisition
imaging
Prior art date
Legal status
Active
Application number
CN202311726713.1A
Other languages
Chinese (zh)
Other versions
CN117425071A (en)
Inventor
李光源
梁吉德
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202311726713.1A
Publication of CN117425071A
Application granted
Publication of CN117425071B
Status: Active
Anticipated expiration


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals
    • H04N23/671 Focus control based on electronic image sensor signals in combination with active ranging signals, e.g. using light or sound signals emitted toward objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present application relates to the field of electronic technologies, and in particular to an image acquisition method, an electronic device, and a computer readable storage medium. The method is applied to an electronic device including a first camera and a second camera (for example, a main camera and another camera), and includes the following steps: when an acquisition instruction is detected (for example, an instruction generated after a user opens a code scanning program), the first camera and the second camera are controlled to acquire, and the acquisition picture of the first camera is displayed; in response to no imaging target (for example, a two-dimensional code) being detected in the image acquired by the first camera, while an imaging target is detected in a first image acquired by the second camera with first acquisition parameters, the first camera is controlled to acquire according to the first acquisition parameters to obtain a second image including the imaging target. The scheme of the application can make full use of the acquisition capability of multiple cameras, improve the acquisition efficiency of the first camera, and improve the efficiency of identifying the imaging target.

Description

Image acquisition method, electronic equipment and storage medium
Technical Field
The present application relates to the field of electronic technologies, and in particular, to an image acquisition method, an electronic device, and a storage medium.
Background
At present, people increasingly use electronic devices (such as terminal devices like mobile phones and tablet computers, and access control devices) to capture the real environment, acquire images of imaging targets in the environment (such as graphic codes, characters, animals and documents), and realize related functions based on the acquired images of the imaging targets. For example, an electronic device may identify the information carried in a graphic code by capturing an image of the graphic code and perform corresponding functions based on the identified information, such as accessing a web page, downloading an application (APP), following an account, or scanning for payment. For another example, an access control system may identify whether a user has the right to pass by capturing an image of the user in the environment.
However, in an actual application scenario, the initial focusing area when the camera of the electronic device starts is generally not located in the area where the imaging target is, so the image of the imaging target in the image acquired by the camera is not clear. The electronic device needs to continuously adjust the acquisition parameters (for example, adjust the focusing area from near to far or from far to near) to acquire a clear image of the imaging target. As a result, it takes the electronic device a long time to acquire a clear image of the imaging target, which affects the efficiency with which the electronic device realizes related functions based on the acquired image.
Disclosure of Invention
The application aims to provide an image acquisition method, electronic equipment and a computer readable storage medium.
In a first aspect, the present application provides an image acquisition method applied to an electronic device, where the electronic device includes a first camera and a second camera, and the method includes the following steps: detecting an acquisition instruction, controlling the first camera and the second camera to acquire, and displaying an acquisition picture of the first camera; in response to no imaging target being detected in the image acquired by the first camera and an imaging target being detected in a first image acquired by the second camera with first acquisition parameters, controlling the first camera to acquire according to the first acquisition parameters to obtain a second image including the imaging target, and displaying the second image.
That is, in the embodiment of the present application, when no imaging target is detected in the image acquired by the first camera but an imaging target is detected in the image acquired by the second camera, the first camera may be controlled to readjust its acquisition parameters so as to acquire an image including the imaging target. It can be understood that before the first camera re-acquires, the image of the imaging target in the acquisition picture displayed on the electronic device is not clear enough; after the first camera re-acquires according to the first acquisition parameters of the second camera, a clearer imaging target can be displayed in the acquisition picture, and the imaging target can be detected from the newly acquired image of the first camera.
In the embodiment of the present application, a second camera besides the first camera can be used to acquire and identify the target, and when the first camera has not identified the imaging target but the second camera has, the first camera is controlled to acquire, according to the acquisition parameters of the second camera, a first target image in which the imaging target is clear enough. This can speed up identification of the imaging target by the first camera and improve its target identification efficiency.
In a possible implementation of the first aspect, the first camera collects based on a first focusing strategy, and the second camera collects based on a second focusing strategy, where the first focusing strategy and the second focusing strategy are different.
In a possible implementation of the first aspect, the first focusing strategy is to acquire multiple images with the focus moving from near to far, and the second focusing strategy is to acquire multiple images with the focus moving from far to near, where the distance from the imaging target to the electronic device is greater than a preset distance.
In the embodiment of the application, the first focusing strategy is to traverse the focusing point or focusing area of the first camera from near to far, during which the first camera can continuously acquire images at a preset frequency; a distance from the imaging target to the electronic device greater than the preset distance indicates that the imaging target is in the distant view. Similarly, the second focusing strategy is to traverse the focusing point or focusing area of the second camera from far to near, during which the second camera can continuously acquire images at a preset frequency.
In this way, the second camera can quickly identify an imaging target in the distant view, and the first camera can be adjusted according to the acquisition parameters of the second camera, so that the first camera can also quickly identify the distant imaging target, improving the target detection efficiency of the first camera in this scene (where the imaging target is far away). The two strategies can be pictured as opposite traversals of the same focus range, as in the sketch below.
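As an illustration only, the following minimal sketch shows the two opposite focus traversals. The `Camera` interface, its methods, and the motor code range are hypothetical assumptions for illustration, not APIs defined by this application.

```python
# Hypothetical camera interface and motor code range; not an API
# defined by this application. Larger motor codes are assumed here
# to focus farther away (the real mapping is camera-specific).
MOTOR_RANGE = list(range(0, 1024, 32))

near_to_far = MOTOR_RANGE                  # first focusing strategy
far_to_near = list(reversed(MOTOR_RANGE))  # second focusing strategy

def traverse(camera, positions):
    """Step the focus motor through `positions`, acquiring one frame
    per step; the caller runs detection on the yielded frames."""
    for pos in positions:
        camera.set_motor_position(pos)  # moves the focusing lens group
        yield camera.capture()
```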
In a possible implementation of the first aspect, detecting that an imaging target exists in a first image acquired by the second camera with first acquisition parameters includes: detecting, among multiple images acquired by the second camera whose acquisition range corresponds to that of the first camera, a first image in which an imaging target exists.
In the embodiment of the application, the detection range of the image acquired by the second camera is consistent with that of the image acquired by the first camera; that is, they correspond to the same range of the real environment. For example, if the field angle of the image acquired by the first camera is a first angle, the image acquired by the second camera may be cropped and the cropped image detected, where the field angle corresponding to the cropped image is the first angle.
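As a rough illustration of such cropping, the sketch below center-crops a wider-FOV frame so that its view angle matches a narrower target field of view. It assumes a pinhole model with aligned optical axes, and the function name is a hypothetical helper, not something specified by this application.

```python
import math

import numpy as np

def crop_to_fov(image_wide: np.ndarray, wide_fov_deg: float,
                target_fov_deg: float) -> np.ndarray:
    """Center-crop a frame from a wider-FOV camera so that the cropped
    view angle equals `target_fov_deg` (pinhole approximation with
    aligned optical axes)."""
    h, w = image_wide.shape[:2]
    ratio = (math.tan(math.radians(target_fov_deg / 2))
             / math.tan(math.radians(wide_fov_deg / 2)))
    ch, cw = int(h * ratio), int(w * ratio)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    return image_wide[y0:y0 + ch, x0:x0 + cw]
```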
In a possible implementation of the first aspect, controlling the first camera to acquire according to the first acquisition parameters to obtain and display a second image including the imaging target includes: determining the distance from the imaging target to the electronic device according to the first acquisition parameters; determining, based on that distance, second acquisition parameters required for the first camera to acquire the second image; and controlling the first camera to acquire with the second acquisition parameters to obtain the second image including the imaging target.
That is, in the embodiment of the application, the distance from the imaging target to the electronic device is the object distance corresponding to the imaging target. It can be understood that, when there are multiple imaging targets, the object distance corresponds to the imaging target located at the focusing point.
In a possible implementation manner of the first aspect, the first acquisition parameter and the second acquisition parameter are respectively focusing parameters.
That is, in the embodiment of the present application, the focusing parameter may be a motor position of the camera.
In a possible implementation of the first aspect, the second camera is controlled to stop capturing the image in response to detecting the imaging target in the image captured by the first camera.
In a possible implementation of the first aspect, the second image has a plurality of imaging targets displayed therein, and the method further includes: a first imaging target selected by a user from a plurality of imaging targets is identified.
In a possible implementation manner of the first aspect, the number of the first cameras is one, and the number of the second cameras is a plurality; detecting that an imaging target exists in a first image acquired by a second camera according to a first acquisition parameter comprises: detecting that an imaging target exists in a first image acquired by one second camera of the plurality of second cameras according to the first acquisition parameters.
In a possible implementation manner of the first aspect, the first camera is a wide-angle camera, and the second camera is an ultra-wide-angle camera.
In a possible implementation of the first aspect, the imaging target includes at least one of a graphic code, a character, a scene, an item, a text, a document.
In a second aspect, the present application provides an image acquisition method applied to an electronic device, where the electronic device includes a first camera and a second camera, and the method includes the following steps: detecting an acquisition instruction, controlling the first camera and the second camera to acquire, and displaying an acquisition picture of the first camera; in response to no imaging target being detected in the image acquired by the first camera and an imaging target being detected in a third image acquired by the second camera, switching the displayed acquisition picture of the first camera to the acquisition picture of the second camera, where the imaging target is displayed in the acquisition picture of the second camera.
That is, in the embodiment of the present application, when no imaging target is detected in the image acquired by the first camera but an imaging target is detected in the image acquired by the second camera, the displayed acquisition picture of the first camera may be switched to the acquisition picture of the second camera. It can be understood that before the switch, the image of the imaging target in the acquisition picture displayed on the electronic device is not clear enough; after the switch, a clearer imaging target can be displayed in the acquisition picture, that is, the imaging target can be detected from the newly acquired image of the second camera.
With this image acquisition method, the characteristics of different cameras can be fully utilized, and the acquisition picture of the camera with higher detection efficiency in the current scene, such as the second camera, can be displayed, thereby improving the efficiency of detecting the imaging target.
In a possible implementation of the second aspect, the acquisition picture of the second camera is a picture obtained by adjusting the image acquired by the second camera to correspond to the acquisition range of the first camera.
In the embodiment of the application, the displayed acquisition picture of the second camera corresponds to the same range of the real environment as the acquisition picture of the first camera displayed before the switch. For example, if the field angle of the image acquired by the first camera is a first angle, the image acquired by the second camera may be cropped and the cropped image displayed, where the field angle corresponding to the cropped image is the first angle.
In a possible implementation of the second aspect, the first camera collects based on a first focusing strategy, and the second camera collects based on a second focusing strategy, where the first focusing strategy and the second focusing strategy are different.
In a possible implementation of the second aspect, the first focusing strategy is to acquire multiple images with the focus moving from near to far, and the second focusing strategy is to acquire multiple images with the focus moving from far to near.
That is, in the embodiment of the present application, the first focusing strategy is to traverse the focusing point or focusing area of the first camera from near to far, during which the first camera may continuously acquire images at a preset frequency. Similarly, the second focusing strategy is to traverse the focusing point or focusing area of the second camera from far to near, during which the second camera may continuously acquire images at a preset frequency.
In a possible implementation of the second aspect, detecting that an imaging target exists in a first image acquired by the second camera with first acquisition parameters includes: detecting, among multiple images acquired by the second camera whose acquisition range corresponds to that of the first camera, a first image in which an imaging target exists.
In the embodiment of the application, the detection range of the image acquired by the second camera is consistent with that of the image acquired by the first camera; that is, they correspond to the same range of the real environment. For example, if the field angle of the image acquired by the first camera is a first angle, the image acquired by the second camera may be cropped and the cropped image detected, where the field angle corresponding to the cropped image is the first angle.
In a possible implementation of the second aspect, there are multiple imaging targets, and displaying the imaging target in the acquisition picture of the second camera includes: displaying the multiple imaging targets in the acquisition picture of the second camera; and identifying a first imaging target selected by the user from the multiple imaging targets.
In a possible implementation of the second aspect, there is one first camera and there are multiple second cameras; detecting that an imaging target exists in a third image acquired by the second camera includes: detecting that an imaging target exists in a third image acquired by a target camera among the multiple second cameras; and switching to the acquisition picture of the second camera includes: switching to the acquisition picture of the target camera.
In a possible implementation manner of the second aspect, the first camera is a wide-angle camera, and the second camera is an ultra-wide-angle camera.
In one possible implementation of the second aspect, the imaging target includes at least one of a graphic code, a character, a scene, an item, a text, and a document.
In a third aspect, an embodiment of the present application provides an electronic device, including: a memory storing instructions for execution by one or more processors of the electronic device, and a processor, where the instructions, when executed by the processor, cause the electronic device to perform the method of the first aspect or the second aspect.
In a fourth aspect, an embodiment of the present application provides a storage medium having stored thereon instructions that, when executed on an electronic device, cause the electronic device to perform the method of the first aspect or the second aspect described above.
In a fifth aspect, embodiments of the present application provide a chip comprising programmable logic circuits and/or program instructions, which when executed, implement the method of the first or second aspect.
Drawings
FIG. 1A shows a schematic view of a first type of imaging screen in accordance with the present application;
FIG. 1B shows a schematic diagram of a second type of imaging screen according to the present application;
FIG. 2A illustrates a first interface schematic of an electronic device 100, in accordance with an embodiment of the present application;
FIG. 2B illustrates a second interface schematic of the electronic device 100, in accordance with an embodiment of the present application;
FIG. 3 is a first flow chart of an image acquisition method according to an embodiment of the application;
FIG. 4 shows a second flow diagram of an image acquisition method according to an embodiment of the application;
FIG. 5A shows a schematic diagram of a first type of imaging screen, according to an embodiment of the present application;
FIG. 5B is a schematic diagram of a second type of imaging screen according to an embodiment of the present application;
FIG. 6A is a schematic diagram of a third imaging screen according to an embodiment of the present application;
FIG. 6B is a schematic diagram of a fourth imaging screen according to an embodiment of the present application;
FIG. 7 shows a third flow diagram of an image acquisition method according to an embodiment of the application;
FIG. 8 illustrates a hardware framework diagram of an electronic device 100, according to an embodiment of the application;
FIG. 9 shows a software architecture block diagram of the electronic device 100 according to an embodiment of the application.
Detailed Description
Illustrative embodiments of the application include, but are not limited to, an image acquisition method, an electronic device, and a storage medium.
It should be appreciated that the imaging target in embodiments of the present application may be a graphic code, character, scene, animal, file, text, face, etc. For convenience of description, in the following embodiments, the technical solution of the present application is described by taking an imaging target as an example of a graphic code.
Generally, an electronic device acquires images through a single camera and identifies a graphic code from the images acquired by that camera. However, the initial focusing area of the camera is usually not at the graphic code, so the camera needs to continuously adjust the focusing area until it focuses at the graphic code, or until the graphic code is located within the camera's depth of field (i.e., the clear area); before that, the imaging of the graphic code is not clear enough. That is, a single camera takes a long time to obtain a clear image of the graphic code, and before it does, the terminal device cannot recognize the graphic code, resulting in low recognition efficiency.
It can be understood that the imaging picture acquired by the camera is displayed on the acquisition interface of the electronic device. The electronic device can control the camera to perform automatic focusing, that is, control a motor to move the focusing lens group in the camera, changing the focusing point in the imaging picture or changing the magnification of the imaging, so that the picture near the focusing point is clear. For example, the focusing point may be changed so that it is located at the graphic code, or the image may be enlarged based on optical zoom, thereby making the imaging of the graphic code clear.
For example, in some embodiments, the electronic device focuses based on a particular focusing strategy, which in some scenarios may not recognize the graphic code quickly. For example, when a near-to-far strategy is adopted (i.e., focusing on a near object first, then gradually moving the focusing area or focusing point farther to focus on farther objects), if the graphic code is in the distant view of the acquisition picture (i.e., the imaging picture displayed on the electronic device when the user performs the code scanning operation), a long focusing process is required before the camera focuses at or near the graphic code, so that the graphic code falls within the depth of field (the clear range) and a clear picture of the graphic code is obtained.
As an example, the electronic device may be a mobile terminal 10, and the real environment in front of the camera of the mobile terminal 10 is the environment near a parking lot barrier. Taking FIG. 1A and FIG. 1B as examples, they respectively illustrate imaging pictures of a code scanning interface, including a steering wheel 01 in a vehicle, a two-dimensional code 02 on a stand board outside the windshield, and a two-dimensional code 03 on the wall surface behind the stand board.
For example, the mobile terminal 10 focuses based on the near-to-far strategy. First, as shown in FIG. 1A, the focusing point of the camera is located nearby, for example at the steering wheel 01 in the vehicle; at this time the imaging of the steering wheel 01 is clear, while the imaging of the two-dimensional code 02 and the two-dimensional code 03 is blurred. Then the focusing point moves from near to far, that is, the image distance of the lens shortens and the object distance lengthens. After a period of time, the focusing point reaches the two-dimensional code 02 on the stand board; at this time the imaging of the two-dimensional code 02 changes from blurred to sharp, so the mobile terminal 10 can recognize the two-dimensional code 02 and display a graphic code selection frame around it. Meanwhile, the two-dimensional code 03 is outside the depth of field, so its imaging is blurred. In the above process, the mobile terminal 10 takes a long time to detect the two-dimensional code 02, and the detection efficiency is low.
Similarly, if the electronic device adopts a far-to-near focusing strategy and the graphic code is close to the electronic device, the camera focuses at the far position first and only gradually focuses toward the nearby graphic code, so it also takes a long time before a clear image of the nearby graphic code can be acquired.
That is, regardless of whether a near-to-far or far-to-near focusing strategy is adopted, in some scenarios it may take a long time to acquire a clear image of the graphic code when only one camera is used.
It can be understood that a typical electronic device is provided with multiple cameras; for example, a mobile phone may have a main camera (wide-angle camera), an ultra-wide-angle camera, a macro camera, a fixed-focus camera, and the like. Therefore, to make full use of this acquisition capability, in the image acquisition method provided by the embodiment of the present application, the electronic device can acquire images with multiple cameras in parallel and detect, for each camera, whether an imaging target exists in the images it acquires. If an imaging target is detected in a target image acquired by a target camera, the detection result based on the target camera may be displayed. For example, if no imaging target is detected in the image acquired by the main camera but one is detected in the image acquired by the macro camera, the imaging picture displayed by the electronic device may be switched to the acquisition picture of the macro camera, in which the detected imaging target is displayed. In this way, the characteristics of different cameras can be fully utilized, and the acquisition picture of the camera with higher detection efficiency in the current scene, such as the target camera, is displayed, improving the efficiency of detecting the imaging target.
In some alternative embodiments, the electronic device may use the first camera and at least one second camera in parallel to acquire images with different focusing strategies, and detect separately whether an imaging target exists in the images acquired by each camera. If the electronic device detects the imaging target in the image acquired by the first camera, it controls the first camera to acquire the image of the imaging target. If the electronic device detects the imaging target in the image acquired by a third camera among the at least one second camera, it may determine focusing information corresponding to the imaging target (e.g., the distance from the imaging target to the electronic device) based on the current acquisition parameters (e.g., focusing parameters) of the third camera, and control the first camera to acquire the image of the imaging target based on the determined focusing information.
After the electronic device acquires the image of the imaging target, related functions can be realized. For example, when the imaging target is a two-dimensional code (or bar code), the electronic device can identify the acquired image containing the two-dimensional code to realize functions such as payment and web page access; for instance, in the parking lot scene shown in FIG. 1A, the electronic device can scan the two-dimensional code beside the barrier to pay for parking. When the imaging target is a file, the electronic device can identify the acquired image of the file to realize functions such as text recognition and document scanning. For another example, when the imaging target is a document, the electronic device may correct the image of the document after acquiring it; when the imaging target is text, the electronic device can translate the text after acquiring its image; and when the imaging target is a face, the electronic device may identify features of the face after acquiring the face image to determine whether the corresponding user has access rights to a certain system (e.g., an access control system).
In some embodiments, the electronic device may display the image acquired by the first camera in real time on its screen, that is, on the acquisition interface triggered when the user clicks the camera control, without displaying the image acquired by the second camera. Target detection is performed based on the images acquired by at least two cameras of the electronic device (including the first camera and at least one second camera), while the imaging picture of the first camera is displayed on the acquisition interface in real time. If the first camera recognizes the imaging target, the electronic device can control the first camera to acquire an image of the imaging target and display it on the acquisition interface. If a third camera among the at least one second camera detects the imaging target before the first camera does, the electronic device can determine the focusing information used by the third camera, determine the distance between the imaging target and the electronic device based on that focusing information, and then control the first camera to refocus based on the distance so as to recognize the imaging target, acquire its image, and display the acquired image on the acquisition interface. For example, if the first camera has not recognized a distant imaging target based on its near-to-far focusing strategy while the second camera quickly recognizes it based on its far-to-near focusing strategy, the target position of the imaging target may be determined based on the focusing parameters of the second camera, and the first camera may be controlled to focus far according to that position, so as to recognize the distant imaging target. In this way, at least two cameras are used to recognize the imaging target, the limitation of a single camera's focusing strategy is avoided, and the recognition efficiency is improved.
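The control flow just described can be pictured with the following minimal sketch. Everything here — the camera objects, the detector, the conversion helpers, and the display/crop callbacks — is a hypothetical stand-in used for illustration, not an interface defined by this application.

```python
def dual_camera_scan(cam1, cam2, detect, crop_to_cam1_fov, show,
                     motor_to_distance, distance_to_motor):
    """One detection round: show camera 1's picture; if camera 2 finds
    the target first, transfer its focus information to camera 1."""
    cam1.start_autofocus(strategy="near_to_far")
    cam2.start_autofocus(strategy="far_to_near")
    while True:
        frame1 = cam1.capture()
        show(frame1)                      # only camera 1 is displayed
        if detect(frame1) is not None:
            cam2.stop()                   # camera 1 detected first
            return frame1
        frame2 = crop_to_cam1_fov(cam2.capture())
        if detect(frame2) is not None:
            # Camera 2 detected first: estimate the object distance
            # from its focus motor position, then refocus camera 1.
            dist = motor_to_distance(cam2.motor_position, cam2.params)
            cam1.set_motor_position(distance_to_motor(dist, cam1.params))
            return cam1.capture()
```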
According to some embodiments, after the electronic device detects the imaging target in the first image acquired by the second camera, the distance from the imaging target to the electronic device may be determined based on the focusing parameter of that image, that is, the motor position when the second camera acquired it. The electronic device can then determine the motor position of the first camera based on that distance, so that the first camera can move its motor to the determined position, focus on the imaging target detected by the second camera, obtain a clear image of the target, and display it on the acquisition interface.
According to some embodiments, the first camera may adopt an auto-focus mode with a near-to-far focusing strategy, and the second camera may adopt an auto-focus mode with a far-to-near focusing strategy. It can be understood that the focusing point of the first camera can traverse from macro to infinity, outputting image data to the target detection model for detection every N frames (N is an integer greater than 0); the focusing point of the second camera can traverse from infinity to macro, likewise outputting image data to the target detection model every N frames.
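As an illustration of this cadence, the sketch below traverses assumed motor positions and runs the detector only on every N-th frame; the camera interface and detector are hypothetical stand-ins.

```python
def traverse_and_detect(camera, positions, detect, n=3):
    """Step the focus motor through `positions`, acquiring one frame
    per step and running detection only on every n-th frame (n > 0)."""
    for i, pos in enumerate(positions):
        camera.set_motor_position(pos)
        frame = camera.capture()
        if i % n == 0:                  # detect every N frames
            result = detect(frame)
            if result is not None:
                return pos, result      # focus position and detection
    return None
```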
According to some embodiments, the focusing strategy of the first camera is from near to far and there are multiple second cameras, where the focusing strategy of each second camera may be different; for example, the focusing strategy of one second camera is from far to near, that of another is from near to far, and those of the remaining second cameras are from the middle of the focus range toward near, from the middle toward far, and so on. The embodiment of the present application does not limit the focusing strategy of each second camera.
According to some embodiments, the first camera may be the main camera and the second camera may be a wide-angle camera. If the imaging target, or another imaging target, has already been identified in the image acquired by the first camera before any imaging target is identified in the image acquired by the second camera, the second camera does not need to perform any further operation; that is, the detection process is completed based on the image acquired by the first camera. In this embodiment, detecting the imaging target in the image acquired by the first camera means that, in this scene, the detection efficiency based on the first camera is higher, so the first camera is preferentially allowed to complete the target detection.
The interface of the electronic device 100 in the embodiment of the present application is described below with reference to FIG. 2A and FIG. 2B.
As shown in FIG. 2A, after the user performs a detection operation, the electronic device 100 may display the interface of the code scanning application shown in FIG. 2A. The detection operation may be clicking a code scanning button or code scanning control, and it triggers the code scanning application of the electronic device 100. As shown in FIG. 2A, the interface of the electronic device 100 displays a target detection program, and the interface includes an imaging screen 20 and a detection frame 30; that is, the electronic device 100 can perform imaging target detection based on the image displayed in the imaging screen 20.
It can be understood that the imaging screen 20 displays the image of the surrounding environment of the electronic device 100 acquired by the first camera, and the displayed picture of the imaging screen 20 corresponds to the acquisition range of the first camera. That is, if the user moves the electronic device 100 or changes its angle, the imaging screen 20 changes accordingly.
In the embodiment of the present application, after the user performs the detection operation, the second camera also acquires images of the surrounding environment of the electronic device 100; however, the images acquired by the second camera are only used for detecting the imaging target and are not displayed on the interface of the electronic device 100.
It can be understood that after the electronic device 100 detects the imaging target based on the image displayed in the imaging screen 20, it may automatically control the displayed image to zoom in and move so that the detected image of the imaging target is located in the detection frame 30.
It should be noted that, in the above embodiment, only one imaging target is included in the image acquired by the first camera, and in other alternative embodiments, a plurality of imaging targets may be included in the image acquired by the first camera, and accordingly, a plurality of detection frames 30 may be displayed on the interface of the electronic device 100, which respectively correspond to the plurality of imaging targets.
As shown in FIG. 2B, after the user performs a detection operation, the electronic device 100 may display the interface of the target detection application shown in FIG. 2B. The detection operation may be clicking a target detection button or target detection control, and it triggers the target detection application of the electronic device 100.
As shown in FIG. 2B, a detection category button 40 is displayed on the interface of the electronic device 100, and the corresponding detection categories include identification, document scanning, person identification, translation, code scanning, and shopping; the imaging screen 20 shown in FIG. 2B includes a person 50, an object 60, and text 70. For example, the person 50 shown in FIG. 2B is the imaging target corresponding to the user clicking the person identification button among the detection category buttons 40; the object 60 is the imaging target corresponding to the user clicking the identification button, and the text 70 is the imaging target corresponding to the user clicking the translation button.
It should be noted that the detection category in the above embodiment is only an example, and in other alternative embodiments, the detection category button 40 may also include buttons of other detection categories.
The image acquisition method according to the embodiment of the present application is described below taking an electronic device 100 including a first camera and a second camera as an example. It is understood that the electronic device 100 may be a smart phone, a desktop computer, a tablet computer, a notebook computer, a smart speaker, a digital assistant, an augmented reality (AR)/virtual reality (VR) device, a smart wearable device, or the like. Alternatively, the operating system running on the electronic device 100 may include, but is not limited to, an Android system, an iOS system, Linux, Windows, and the like.
Illustratively, FIG. 3 shows a first flow diagram of an image acquisition method according to some embodiments of the present application. The execution subject of this flow is the electronic device 100. Referring to FIG. 3, an exemplary flow of an image acquisition method according to an embodiment of the present application includes:
S301: Detecting an acquisition instruction, controlling the first camera and the second camera to acquire, and displaying the acquisition picture of the first camera.
It can be understood that the acquisition instruction represents the user's intent to acquire and detect imaging targets in the surrounding environment of the electronic device 100 using its camera function.
In an alternative embodiment, the electronic device 100 may detect an instruction to acquire an imaging target (i.e., a graphic code) upon detecting that the user has started the code scanning program. At this time, the interface of the electronic device 100 is as shown in FIG. 2A.
In an alternative embodiment, the electronic device 100 may detect the acquisition instruction upon detecting that the user has launched a target detection application (e.g., a code scanner) and clicked a detection button (e.g., a person, object, code scanning, or translation button).
Illustratively, as shown in FIG. 2B, a detection category button 40 is displayed on the interface of the electronic device 100, and the corresponding detection categories include identification, document scanning, person identification, translation, code scanning, and shopping; the imaging screen 20 shown in FIG. 2B includes a person 50, an object 60, and text 70. For example, the person 50 shown in FIG. 2B is the imaging target corresponding to the user clicking the person identification button among the detection category buttons 40; the object 60 is the imaging target corresponding to the user clicking the identification button, and the text 70 is the imaging target corresponding to the user clicking the translation button.
In other alternative embodiments, the user may also perform the detection operation through other applications, such as chat applications, shopping applications, video applications, and the like.
According to some embodiments, the first camera adopts an auto-focus mode with a near-to-far focusing strategy, and the second camera adopts an auto-focus mode with a far-to-near focusing strategy. It can be understood that the focusing point of the first camera may traverse from macro to infinity, and the focusing point of the second camera may traverse from infinity to macro.
Optionally, there may be multiple second cameras, and the focusing strategy of each second camera may be different; for example, the focusing strategy of one second camera is from far to near, that of another is from near to far, and those of the remaining second cameras are from the middle of the focus range toward near, from the middle toward far, and so on. The embodiment of the present application does not limit the focusing strategy of each second camera.
In an alternative embodiment, the first camera is the main camera, i.e., the primary camera, of the electronic device 100, and the second camera is a non-main camera of the electronic device 100.
In some embodiments, the main camera may be a wide-angle camera with the highest pixel count among all cameras of the electronic device 100.
Optionally, the non-main camera may be an ultra-wide-angle camera, which may have a larger depth of field than the main camera, i.e., a larger range over which imaging is clear.
In other embodiments, the second camera may also be a macro camera, etc., and the embodiments of the present application do not limit the types of the first camera and the second camera.
It can be understood that the first camera may acquire first acquired images at a fixed frequency and send them for display; the first acquired images are displayed on the display interface of the electronic device 100, for example in the imaging screen 20 shown in FIG. 2A.
In an alternative embodiment, when the acquisition instruction is detected, the electronic device may control the first camera to acquire a first acquired image and control the second camera to acquire a second acquired image. The acquisition ranges of the first camera and the second camera may be the same or different. Detection may then be performed on the first acquired image and on the part of the second acquired image corresponding to the acquisition range of the first camera (i.e., the part where the second acquired image overlaps the first acquired image), to determine whether an imaging target exists in each.
As an example, detection may be performed once every N frames on the first acquired images or the second acquired images.
S302: and determining the first acquisition parameters corresponding to the fact that no imaging target is detected in the image acquired by the first camera and the fact that the imaging target exists in the first image acquired by the second camera according to the first acquisition parameters is detected.
It is understood that the first image is a second acquired image of the detected imaging subject.
It will be appreciated that, as an example, the imaging target is located at a distant view in the imaging frame, and the focusing point of the first camera traverses from near to far, and the focusing point of the second camera traverses from far to near, so that the second camera focuses to the imaging target before the first camera, that is, the condition is satisfied: no imaging target is detected in the image acquired by the first camera and the imaging target is detected in the first image currently acquired by the second camera. For example, when the imaging target is located in the near view in the imaging frame, and the focusing point of the first camera traverses from far to near, and the focusing point of the second camera traverses from near to far, the second camera focuses on the imaging target before the first camera, so that the above condition is satisfied.
In an alternative embodiment, the first camera may acquire multiple frames of first acquired images at a fixed frequency, and every N first acquired images may be input into the detection model to detect whether an imaging target exists in them, where N is an integer greater than or equal to 1.
It can be understood that during detection the first camera can adjust the focusing area in real time, acquire first acquired images based on the continuously adjusted focusing area, and perform detection based on those images.
Optionally, the acquired first acquired image may be scaled to obtain a scaled first acquired image, and detection performed on the scaled image. It can be understood that the first acquired image may be enlarged and detection of the imaging target performed on the enlarged image; if no target is found, the steps of enlarging the image and detecting on it are repeated until the imaging target is detected or a maximum magnification threshold is reached. For example, the magnification may be 2 times, 3 times, and so on up to 10 times; the embodiment of the present application does not limit the magnification.
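A minimal sketch of this enlarge-and-retry loop follows; the `resize` helper, the detector, and the scale ladder are assumptions for illustration only.

```python
def detect_with_zoom(frame, detect, resize, scales=(1, 2, 3, 5, 10)):
    """Enlarge the acquired frame step by step and rerun detection,
    stopping at the first hit or once the maximum magnification in
    `scales` has been tried."""
    for s in scales:
        enlarged = resize(frame, scale=s)   # e.g. bilinear upscaling
        result = detect(enlarged)
        if result is not None:
            return result, s                # detection and its scale
    return None, None
```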
In an alternative embodiment, the imaging target may be detected in a coincident image of a second acquired image acquired by the second camera. It can be understood that the coincident image is the part of the second acquired image whose acquisition range belongs to the acquisition range of the first acquired image; that is, only the part of the second acquired image that coincides with the first acquired image needs to be detected. Optionally, when the acquisition range of the second camera includes that of the first camera, the acquisition range of the coincident image can be identical to that of the first acquired image; when the acquisition range of the second camera does not completely include that of the first camera, the coincident image may cover only part of the acquisition range of the first acquired image.
Taking FIG. 2A as an example, the picture range of the imaging screen 20 is the acquisition range of the first camera; when the acquisition range of the second camera includes that of the first camera, the range of the coincident image is the range of the imaging screen 20.
As an example, if the field angle of the image acquired by the first camera is a first angle, the second acquired image acquired by the second camera may be cropped and the cropped image detected; the cropped image is the coincident image, and its corresponding field angle is the first angle.
Optionally, the second camera may acquire multiple frames of second acquired images at the fixed frequency, and every N coincident images may be input into the detection model to detect whether an imaging target exists in them, where N is an integer greater than or equal to 1.
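When the two acquisition ranges only partially coincide, the coincident region can be computed as a rectangle intersection once both ranges are expressed in a common reference frame. The sketch below assumes such calibrated rectangles, which is an assumption: the application does not specify how acquisition ranges are represented.

```python
from typing import Optional, Tuple

Rect = Tuple[int, int, int, int]            # (x0, y0, x1, y1)

def coincident_range(a: Rect, b: Rect) -> Optional[Rect]:
    """Intersection of two acquisition ranges expressed in the same
    (calibrated) coordinate frame; None if they do not overlap."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    if x0 >= x1 or y0 >= y1:
        return None
    return (x0, y0, x1, y1)
```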
It is understood that the first image is the second acquired image in which the imaging target is detected.
S303: Controlling the first camera to acquire according to the first acquisition parameters to obtain a second image including the imaging target, and displaying the second image.
It can be understood that the first camera acquires based on the second acquisition parameters to obtain the first target image, where the imaging target is located within the focusing area range of the first target image. That is, acquiring with the second acquisition parameters allows the first camera to focus on the imaging target and obtain a first target image in which the image of the imaging target is sufficiently clear.
In an alternative embodiment, the first acquisition parameter and the second acquisition parameter may each be a focusing parameter. As an example, the distance from the imaging target to the electronic device may be determined from the first acquisition parameters; second acquisition parameters required for the first camera to acquire the second image may be determined based on that distance; and the first camera may be controlled to acquire with the second acquisition parameters to obtain the second image including the imaging target.
Optionally, the first acquisition parameter may specifically be the motor position when the second camera acquired the first image, and the second acquisition parameter may be the motor position required for the first camera to acquire the imaging target. It can be understood that the motor position determines the camera's image distance and object distance, i.e., the position of the camera's focusing area or focusing point.
Optionally, based on the motor position when the second camera acquired the first image, the distance between the imaging target and the electronic device 100 in the real environment, i.e., the object distance corresponding to the imaging target, may be calculated. Then, based on that object distance, the motor position required for the first camera to acquire the imaging target can be calculated.
In an alternative embodiment, the distance between the imaging target and the electronic device can be obtained from the motor position of the second camera and the parameters of the second camera, and the second acquisition parameters can then be obtained based on that distance and the parameters of the first camera. Optionally, the second acquisition parameter may be a motor position of the first camera; when the motor of the first camera is at that position, the imaging target is located within the focusing area range of the first camera, so a clear image of the imaging target can be obtained through the first camera.
Optionally, the first target image and an identification of the imaging target in the first target image may be displayed on the screen of the electronic device 100; for example, a dot identification may be displayed at the image center of the imaging target.
In an alternative embodiment, multiple imaging targets may be detected based on the first image acquired by the second camera. In this embodiment, a detection identification, such as a dot identification or a detection frame, may be displayed at the image of each imaging target on the interface of the electronic device 100, such as the imaging screen 20 of FIG. 2A.
It can be understood that, in response to the user clicking the identification of an imaging target, the electronic device 100 may parse that imaging target. For example, when the imaging target is a two-dimensional code, the electronic device 100 may parse the two-dimensional code to implement its corresponding function, such as accessing a web page or making a payment.
In the above embodiment, it can be understood that if the imaging target is detected in the image acquired by the second camera, the target detection efficiency based on the second camera is higher in the current scene; in this case, the detection result based on the second camera may be presented preferentially.
In an alternative embodiment, the electronic device may control the second camera to stop acquiring images in response to detecting the imaging target in the image acquired by the first camera while no imaging target has been detected in the image acquired by the second camera. It can be understood that if the imaging target is detected in the image acquired by the first camera, the target detection efficiency based on the first camera is higher in the current scene; in this case, the detection result based on the first camera may be presented preferentially, that is, the first camera continues its focusing traversal according to its own focusing strategy and its acquired images are displayed in real time, while the second camera stops and does not interfere with this process.
In an alternative example, the electronic device may display detection frames for both the first number of imaging targets detected in the image acquired by the first camera and the second number of imaging targets detected in the image acquired by the second camera, the second number being greater than the first number. Then, after detecting that the user has selected an imaging target to be acquired from these detection frames, if the selected imaging target was detected in the image acquired by the first camera, the electronic device can control the first camera to acquire a target image of it directly; if the selected imaging target was detected only in the image acquired by the second camera and cannot be detected in the image acquired by the first camera, the electronic device may derive the acquisition parameters required for the first camera from the acquisition parameters used by the second camera, and acquire the target image of the imaging target based on the derived parameters.
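This selection branch can be sketched as follows; the target identifiers and all helper names are hypothetical stand-ins for illustration.

```python
def acquire_selected_target(selected, cam1_detections, cam1, cam2,
                            motor_to_distance, distance_to_motor):
    """If the user-selected target was already detected by the first
    camera, acquire directly; otherwise transfer the second camera's
    focus information to the first camera before acquiring."""
    if selected in cam1_detections:
        return cam1.capture()
    dist = motor_to_distance(cam2.motor_position, cam2.params)
    cam1.set_motor_position(distance_to_motor(dist, cam1.params))
    return cam1.capture()
```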
With the above image acquisition method, a second camera besides the first camera can be used for target recognition; when the first camera has not recognized the imaging target but the second camera has, the first camera is controlled to acquire, according to the acquisition parameters of the second camera, a first target image in which the imaging target is sufficiently clear. This speeds up recognition of the imaging target by the first camera and improves its target recognition efficiency.
In some embodiments, the acquisition ranges of the first camera and the second camera of the electronic device 100 are different; for example, the acquisition range of the second camera partially coincides with that of the first camera (the overlapping part is hereinafter referred to as the coincident acquisition range). In this case, when detecting the imaging target based on the image acquired by the second camera, the electronic device 100 may detect only in the part of the image corresponding to the coincident acquisition range (hereinafter referred to as the coincident image), without detecting the other parts of the image acquired by the second camera, which further improves the efficiency of acquiring a clear image of the imaging target.
Another exemplary flow of an image acquisition method provided by an embodiment of the present application is described below based on fig. 4.
Illustratively, fig. 4 shows a second flow diagram of an image acquisition method, in accordance with some embodiments of the present application. The execution subject of this flow is the electronic device 100. Referring to fig. 4, an exemplary flow of an image acquisition method according to an embodiment of the present application includes:
S401: detecting an acquisition instruction, controlling the first camera and the second camera to acquire, and displaying an acquisition picture of the first camera.
For the content of step S401, reference may be made to the description of step S301 in fig. 3 above; it is not repeated here.
S402: in response to detecting no imaging target in the image acquired by the first camera and detecting that an imaging target exists in the first image acquired by the second camera according to the first acquisition parameters, determining the first acquisition parameters.
For the content of step S402, reference may be made to the description of step S302 in fig. 3 above; it is not repeated here.
S403: switching the displayed acquisition picture of the first camera to the acquisition picture of the second camera.
In other alternative embodiments, the displayed acquisition picture of the first camera may be switched to the acquisition picture of the second camera, where the imaging target is displayed in the acquisition picture of the second camera.
Optionally, the acquisition picture of the second camera is a picture obtained by adjusting the image acquired by the second camera to correspond to the acquisition range of the first camera. That is, the range of the displayed acquisition picture of the second camera coincides with the acquisition range of the first camera.
As an example, the displayed acquisition picture of the second camera corresponds to the same range of the real environment as the acquisition picture of the first camera displayed before the switch. For example, if the angle of view of the image acquired by the first camera is a first angle, the image acquired by the second camera may be cropped and the cropped image sent for display, with the angle of view corresponding to the cropped image being the first angle.
In an alternative embodiment, the number of imaging targets detected in the first image acquired by the second camera is more than one. In this embodiment, the plurality of imaging targets may be displayed in the acquisition picture of the second camera; for example, two-dimensional codes 002 and 003 are displayed on the interface shown in fig. 5B. A first imaging target selected by the user from the plurality of imaging targets may then be identified. For example, the user selects one of the two-dimensional codes 002 and 003 in fig. 5B, and that code is identified so as to realize functions such as web page access and payment.
Optionally, the number of first cameras is one and the number of second cameras is more than one. In this embodiment, if it is detected that an imaging target exists in a third image acquired by a target camera among the plurality of second cameras, the method switches to the acquisition picture of that target camera.
The embodiment of the present application can thus make full use of the characteristics of different cameras and display the acquired picture of whichever camera detects more efficiently in the current scene, such as the second camera, thereby improving the efficiency of detecting an imaging target.
Referring to fig. 5A-5B, the method of the embodiment of the present application is described below by taking an example that the real environment in front of the first camera and the second camera of the electronic device 100 is an environment near the parking lot barrier and the imaging target is a two-dimensional code.
Fig. 5A and 5B illustrate captured pictures of the electronic device 100, including a steering wheel 001 inside the vehicle, a two-dimensional code 002 on a standing sign outside the windshield, and a two-dimensional code 003 on the wall surface behind the sign.
It can be appreciated that the first camera focuses based on the near-to-far strategy and the second camera focuses based on the far-to-near strategy.
First, as shown in fig. 5A, the focusing point of the first camera is located at a near position, for example at the steering wheel 001 in the vehicle; at this time, the imaging of the steering wheel 001 is clear, while the imaging of the two-dimensional code 002 and the two-dimensional code 003 is blurred.
At this time, the focusing point of the second camera is located at a far position, so the two-dimensional code 003 can be detected in the first acquired image acquired by the second camera. In addition, the second camera is an ultra-wide angle camera with a larger depth of field range, so the image of the two-dimensional code 002 in the first acquired image is also clear. The second camera can therefore detect both the two-dimensional code 002 and the two-dimensional code 003.
Then, the acquisition parameters for capturing the two-dimensional codes 002 and 003 can be calculated based on the acquisition parameters used by the second camera when acquiring the first acquired image, and the first camera is controlled to acquire based on the calculated acquisition parameters, obtaining a first target image as shown in fig. 5B.
As shown in fig. 5B, the images of the two-dimensional codes 002 and 003 are clear, and a detection frame is displayed around each of them, so that the user can select a target two-dimensional code from the two codes for identification by clicking the area within its detection frame. Alternatively, the detection frames may be replaced by other identification marks, such as dot marks or arrow marks.
It can be understood that when the focusing point of the second camera is located far away, for example close to the two-dimensional code 003, both the two-dimensional code 002 and the two-dimensional code 003 can be recognized from the acquired first acquired image. The first camera can then perform focusing adjustment and two-dimensional code recognition according to the corresponding shooting parameters, which greatly improves the recognition efficiency of the two-dimensional codes. Moreover, when the first camera acquires the first target image, its focusing point is also located far away, that is, at or near the two-dimensional code 003; because the focusing area is distant, the depth of field range of the first camera is also large, so the imaging of the two-dimensional code 002 is clear as well, and multiple two-dimensional codes in the distant scene can be recognized simultaneously.
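The observation that a distant focusing point yields a large depth of field can be made quantitative with the standard thin-lens depth-of-field approximation from general optics; this formula is background knowledge, not one disclosed by the embodiment. For focal length f, aperture f-number N, circle of confusion c, and focus distance u well below the hyperfocal distance:

\[ \mathrm{DoF} \approx \frac{2\,N\,c\,u^{2}}{f^{2}} \]

Because the in-focus range grows roughly with the square of the focus distance, focusing the first camera at or near the distant two-dimensional code 003 leaves the nearer two-dimensional code 002 inside the depth of field as well.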
Referring to fig. 6A-6B, the method of the embodiment of the present application is described below by taking as an example that the real environment in front of the first camera and the second camera of the electronic device is a road and the imaging target is a person.
Fig. 6A and 6B illustrate camera application interfaces of the electronic device 100, each including a pedestrian 04 and a pedestrian 05. The user can set the acquisition mode to a person identification mode on the camera application interface so as to identify the people in the acquisition picture.
It can be appreciated that the first camera focuses based on the near-to-far strategy and the second camera focuses based on the far-to-near strategy.
First, as shown in fig. 6A, the focal point of the first camera is located near, i.e. at pedestrian 04, and at this time, the imaging of pedestrian 04 is clear, i.e. pedestrian 04 can be detected immediately in the first collected image collected by the first camera.
Optionally, before the first camera recognizes pedestrian 04, the second camera has not yet recognized pedestrian 05. In this case, since the first camera has already recognized a person, the second camera neither needs to continue acquiring the second acquired image nor needs to perform imaging target recognition on it.
In an alternative embodiment, if the user wants to identify pedestrian 05, the user may manually click on pedestrian 05 on the camera application interface shown in fig. 6B, so that the first camera focuses on pedestrian 05 in the distant view and the electronic device 100 identifies pedestrian 05 based on the first captured image.
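This tap-to-focus behavior can be sketched as mapping a tap on the preview to a normalized focus window for the autofocus routine. The function name, window size, and coordinates below are hypothetical values for illustration, not an interface disclosed by the embodiment.

```python
def tap_to_focus_region(tap_x, tap_y, view_w, view_h, size=0.2):
    """Return a normalized (left, top, right, bottom) focus window
    centered on the tap and clamped to the preview bounds."""
    cx, cy = tap_x / view_w, tap_y / view_h
    half = size / 2
    return (max(0.0, cx - half), max(0.0, cy - half),
            min(1.0, cx + half), min(1.0, cy + half))

# Hypothetical example: the user taps pedestrian 05 at pixel (900, 400)
# on a 1080x2340 preview; the first camera then focuses on this window.
print(tap_to_focus_region(900, 400, 1080, 2340))
```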
An exemplary flow of an embodiment of the present application is further described below with reference to fig. 7.
First, the code scanning program is started (701), and the electronic device 100 starts to collect images through the main camera and the ultra-wide angle camera; that is, the bottom layer is notified to start the main-camera and ultra-wide-angle dual-channel streams (702), and the main channel is sent for display and enters the code scanning process (703). It can be understood that being sent for display means being displayed on the screen of the electronic device 100. The ultra-wide-angle stream is not displayed; instead, two-dimensional code detection (704) is performed on the image within the field of view (FOV) shared with the main camera.
If the ultra-wide angle camera uses automatic focusing, the focusing point is traversed from infinity to the macro distance; if the ultra-wide angle camera uses fixed focusing, the focusing point is not changed. Data from every Nth frame is then sent to the two-dimensional code detection algorithm for detection (705). For example, an image is obtained while the focusing point is traversed from infinity to the macro distance, and the image is detected after being enlarged X times. After the image is enlarged X times (706), it is determined whether two-dimensional code information is detected in the ultra-wide angle image (707); if not, the main-path code scanning process continues to completion (708); if yes, the motor position corresponding to the image in which the two-dimensional code was detected is recorded (709).
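The every-Nth-frame sampling and X-times enlargement can be sketched as follows. The detector stub and the values of N and X are assumptions for illustration; the embodiment does not fix them.

```python
def center_zoom(frame, zoom):
    """Center-crop to 1/zoom of each dimension, standing in for the
    digital X-times enlargement applied before detection."""
    h, w = len(frame), len(frame[0])
    ch, cw = max(1, h // zoom), max(1, w // zoom)
    top, left = (h - ch) // 2, (w - cw) // 2
    return [row[left:left + cw] for row in frame[top:top + ch]]

def detect_qr(patch):
    """Toy detector: reports a code if any pixel is marked 'Q'."""
    return any('Q' in row for row in patch)

def sample_and_detect(frames, n=4, zoom=3):
    """Run detection only on every Nth frame, enlarged by `zoom`."""
    for i, frame in enumerate(frames):
        if i % n == 0 and detect_qr(center_zoom(frame, zoom)):
            yield i  # frame index in which a code was found

# Toy run: nine 9x9 frames, with a 'Q' at the center of the last one.
blank = [['.'] * 9 for _ in range(9)]
coded = [row[:] for row in blank]
coded[4][4] = 'Q'
print(list(sample_and_detect([blank] * 8 + [coded])))  # prints [8]
```

Sampling every Nth frame keeps the detection load on the auxiliary path low, while the enlargement compensates for the small size of a code in the ultra-wide image.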
At this time, it may be determined whether the main camera has already detected the two-dimensional code (710); if so, the two-dimensional code detection flow is completed (718). If not, the motor position is converted to an object distance and transmitted to the main camera (712); the main camera converts the object distance to the target position of the main camera motor and controls the motor to move from the current focus position to the target position (713). Then, it is determined whether the number of two-dimensional codes detected at the ultra-wide angle is greater than 1 (714). If yes, N selectable two-dimensional codes are displayed (715) and the user clicks to select one (716), that is, the two-dimensional code to be scanned is determined according to the user's selection, after which the two-dimensional code detection flow is completed (718). If not, the single two-dimensional code is displayed (717), and the two-dimensional code detection flow is then completed (718).
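The motor-position conversion in steps 712 and 713 can be sketched as a pair of calibration-table lookups: ultra-wide motor code to object distance, then object distance to main-camera motor code. The piecewise-linear tables below are invented for illustration; a real device would use per-module calibration data.

```python
import bisect

def interp(x, xs, ys):
    """Piecewise-linear interpolation of y(x) over sorted xs."""
    i = min(bisect.bisect_left(xs, x), len(xs) - 1)
    if xs[i] == x or i == 0:
        return ys[i]
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Hypothetical calibration tables: motor code <-> object distance (m).
UW_CODES, UW_DIST = [100, 300, 500, 700], [0.1, 0.5, 2.0, 10.0]
MAIN_DIST, MAIN_CODES = [0.1, 0.5, 2.0, 10.0], [900, 600, 350, 150]

def main_target_code(uw_motor_code):
    distance = interp(uw_motor_code, UW_CODES, UW_DIST)    # step 712
    return round(interp(distance, MAIN_DIST, MAIN_CODES))  # step 713

# Ultra-wide code 600 maps to about 6 m, which maps to main code 250;
# the main camera motor then moves straight to that position.
print(main_target_code(600))  # prints 250
```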
Fig. 8 shows a hardware configuration diagram of the electronic device 100.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors.
The processor 110 may also be provided with a memory for storing instructions and data corresponding to the image acquisition method provided in the embodiment of the present application. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it may call them directly from the memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving the efficiency of the system.
In some embodiments, the processor 110 of the electronic device 100 is configured to execute the image acquisition method of the present application by calling program instructions stored in the memory. For example: detecting an acquisition instruction, controlling the first camera and the second camera to acquire, and displaying an acquisition picture of the first camera; in response to detecting no imaging target in the image acquired by the first camera and detecting that an imaging target exists in the first image acquired by the second camera according to the first acquisition parameters, controlling the first camera to acquire according to the first acquisition parameters, obtaining a second image comprising the imaging target, and displaying the second image. As another example: detecting an acquisition instruction, controlling the first camera and the second camera to acquire, and displaying an acquisition picture of the first camera; in response to detecting no imaging target in the image acquired by the first camera and detecting that an imaging target exists in the third image acquired by the second camera, switching the displayed acquisition picture of the first camera to the acquisition picture of the second camera, wherein the imaging target is displayed in the acquisition picture of the second camera.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light-emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light-emitting diode, QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement capture functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of the acquired scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys, such as the acquisition key of a camera application. The electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100.
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In the embodiment of the invention, taking an Android system with a layered architecture as an example, a software structure of the electronic device 100 is illustrated.
Fig. 9 is a software configuration block diagram of the electronic device 100 according to the embodiment of the present invention.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, which are, from top to bottom, an application layer, an application framework layer, Android runtime (Android runtime) and system libraries, and a kernel layer.
The application layer may include a series of application packages. As shown in fig. 9, the application packages may include applications such as camera, gallery, calendar, phone, maps, navigation, WLAN, Bluetooth, dual card, and mobile network. The user can wake up the camera application or the code scanning application to collect images and record videos in real time.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for the application of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 9, the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like.
Android runtime includes a core library and virtual machines. Android runtime is responsible for the scheduling and management of the Android system.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), etc.
The kernel layer is a layer between hardware and software. The kernel layer includes at least a display driver, a camera driver, an audio driver, and a sensor driver.
Accordingly, an embodiment of the present application provides an electronic device, including: a memory for storing instructions for execution by one or more processors of the electronic device, and a processor for executing the instructions of the image acquisition method described above.
Accordingly, an embodiment of the present application provides a storage medium, where instructions are stored, where the instructions, when executed on an electronic device, cause the electronic device to execute the above-mentioned image capturing method.
Correspondingly, the embodiment of the application provides a chip which comprises a programmable logic circuit and/or program instructions, and when the chip runs, the image acquisition method is realized.
The present specification provides method or process operation steps as illustrated in the examples or flowcharts, but more or fewer operation steps may be included based on conventional or non-inventive labor. The sequence of steps recited in the embodiments is only one of many possible execution sequences and does not represent the unique execution sequence; in actual execution, the steps may be executed in the sequence shown in the embodiments or drawings, or in parallel (for example, in an environment of parallel processors or multithreaded processing).
Embodiments of the present disclosure may be implemented in hardware, software, firmware, or a combination of these implementations. Embodiments of the application may be implemented as a computer program or program code that is executed on a programmable system comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
Program code may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices in a known manner. For the purposes of this application, a processing system includes any system having a processor such as, for example, a Digital Signal Processor (DSP), a microcontroller, an Application Specific Integrated Circuit (ASIC), or a microprocessor.
The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. Program code may also be implemented in assembly or machine language, if desired. Indeed, the mechanisms described in the present application are not limited in scope by any particular programming language. In either case, the language may be a compiled or interpreted language.
In some cases, the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. For example, the instructions may be distributed over a network or through other computer-readable media. Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including but not limited to floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or tangible machine-readable memory used to transmit information over the Internet in an electrical, optical, acoustical, or other form of propagated signal (e.g., carrier waves, infrared signals, digital signals, etc.). Thus, a machine-readable medium includes any type of machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).
As used herein, the term "module" may refer to, be part of, or include: a memory (shared, dedicated, or group) storing one or more software or firmware programs, an application-specific integrated circuit (ASIC), an electronic circuit and/or processor (shared, dedicated, or group), a combinational logic circuit, and/or other suitable components that provide the described functionality.
In the drawings, some structural or methodological features may be shown in a particular arrangement and/or order. However, it should be understood that such a particular arrangement and/or order is not required; rather, in some embodiments, these features may be arranged in a manner and/or order different from that shown in the illustrative figures. Additionally, the inclusion of a structural or methodological feature in a particular drawing does not imply that all embodiments need to include such a feature; in some embodiments it may not be included, or it may be combined with other features.
The embodiments of the present application have been described in detail above with reference to the accompanying drawings, but the use of the technical solution of the present application is not limited to the applications mentioned in the embodiments of the present application, and various structures and modifications can be easily implemented with reference to the technical solution of the present application to achieve the various advantageous effects mentioned herein. Various changes, which may be made by those skilled in the art without departing from the spirit of the application, are deemed to be within the scope of the application as defined by the appended claims.

Claims (18)

1. The image acquisition method is applied to electronic equipment and is characterized in that the electronic equipment comprises a first camera and a second camera which are different, the first camera is a main camera, the depth of field range of an image acquired by the first camera is different from that of the second camera, the first camera acquires based on a first focusing strategy, the second camera acquires based on a second focusing strategy, and the first focusing strategy and the second focusing strategy are different; and
The method comprises the following steps:
detecting an acquisition instruction, controlling the first camera and the second camera to acquire, and displaying an acquisition picture of the first camera;
corresponding to the fact that no imaging target is detected in the image acquired by the first camera, detecting that the imaging target exists in a first image acquired by the second camera according to a first acquisition parameter, wherein the imaging target comprises a graphic code;
controlling the first camera to acquire according to the first acquisition parameters to obtain a second image comprising the imaging target, and displaying the second image;
and analyzing the imaging target in the second image.
2. The method of claim 1, wherein the first focus strategy is to acquire a plurality of images from near to far focus, and the second focus strategy is to acquire a plurality of images from far to near focus;
the distance between the imaging target and the electronic equipment is larger than a preset distance.
3. The method of claim 1, wherein the detecting that the second camera has an imaging target in a first image acquired with a first acquisition parameter comprises:
performing detection on a plurality of images which are acquired by the second camera and whose acquisition range corresponds to that of the first camera, and detecting the first image in which the imaging target exists.
4. The method of claim 1, wherein controlling the first camera to acquire according to the first acquisition parameter, obtaining a second image including the imaging target, and displaying the second image, comprises:
Determining the distance from the imaging target to the electronic equipment according to the first acquisition parameters;
Determining a second acquisition parameter required by the first camera to acquire the second image based on the distance from the imaging target to the electronic equipment;
and controlling the first camera to acquire according to the second acquisition parameters to obtain a second image comprising the imaging target.
5. The method of claim 4, wherein the first acquisition parameter and the second acquisition parameter are each a focus parameter.
6. The method according to claim 1, wherein the method further comprises:
and controlling the second camera to stop acquiring images corresponding to the detection of the imaging target in the images acquired by the first camera.
7. The method of claim 1, wherein a plurality of imaging targets are displayed in the second image, and further comprising:
a first imaging target selected by a user from a plurality of the imaging targets is identified.
8. The method of any one of claims 1-7, wherein the number of first cameras is one and the number of second cameras is a plurality,
The detecting that the imaging target exists in the first image acquired by the second camera according to the first acquisition parameter includes:
detecting that an imaging target exists in a first image acquired by one second camera of the plurality of second cameras according to the first acquisition parameters.
9. The method of any one of claims 1-7, wherein the first camera is a wide angle camera and the second camera is an ultra wide angle camera.
10. The image acquisition method is applied to electronic equipment and is characterized in that the electronic equipment comprises a first camera and a second camera which are different, the first camera is a main camera, the depth of field range of an image acquired by the first camera is different from that of the second camera, the first camera acquires based on a first focusing strategy, the second camera acquires based on a second focusing strategy, and the first focusing strategy and the second focusing strategy are different; and
The method comprises the following steps:
detecting an acquisition instruction, controlling the first camera and the second camera to acquire, and displaying an acquisition picture of the first camera;
Corresponding to the absence of an imaging target in an image acquired by the first camera and the presence of an imaging target in a third image acquired by the second camera, wherein the imaging target comprises a graphic code;
Switching the displayed acquisition picture of the first camera to the acquisition picture of the second camera, wherein the imaging target is displayed in the acquisition picture of the second camera;
and analyzing the imaging target in the third image.
11. The method of claim 10, wherein the acquisition frames of the second camera are:
and adjusting the obtained picture corresponding to the acquisition range of the first camera according to the image acquired by the second camera.
12. The method of claim 10, wherein the first focus strategy is to acquire a plurality of images from near to far focus and the second focus strategy is to acquire a plurality of images from far to near focus.
13. The method of claim 10, wherein the detecting that an imaging target is present in the third image acquired by the second camera comprises:
performing detection on a plurality of images which are acquired by the second camera and whose acquisition range corresponds to that of the first camera, and detecting the third image in which the imaging target exists.
14. The method of claim 10, wherein the number of imaging targets is plural, and a plurality of imaging targets are displayed in the acquisition frame of the second camera;
the analyzing the imaging target in the third image includes:
and analyzing the first imaging target selected by the user from the plurality of imaging targets.
15. The method of any one of claims 10-14, wherein the number of first cameras is one and the number of second cameras is a plurality,
The detecting that an imaging target exists in the third image acquired by the second camera includes:
detecting that an imaging target exists in a third image acquired by a target camera in the plurality of second cameras;
the switching to the acquisition picture of the second camera includes:
and switching to an acquisition picture of the target camera.
16. The method of any one of claims 10-14, wherein the first camera is a wide angle camera and the second camera is an ultra wide angle camera.
17. An electronic device, comprising:
a memory for storing instructions for execution by one or more processors of the electronic device, and
A processor, which when executing the instructions in the memory, causes the electronic device to perform the method of any one of claims 1-9 or 10-16.
18. A storage medium having stored thereon instructions that, when executed on an electronic device, cause the electronic device to perform the method of any of claims 1-9 or 10-16.
CN202311726713.1A 2023-12-15 2023-12-15 Image acquisition method, electronic equipment and storage medium Active CN117425071B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311726713.1A CN117425071B (en) 2023-12-15 2023-12-15 Image acquisition method, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311726713.1A CN117425071B (en) 2023-12-15 2023-12-15 Image acquisition method, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN117425071A CN117425071A (en) 2024-01-19
CN117425071B true CN117425071B (en) 2024-09-17

Family

ID=89526933

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311726713.1A Active CN117425071B (en) 2023-12-15 2023-12-15 Image acquisition method, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117425071B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105530421A (en) * 2014-09-30 2016-04-27 宇龙计算机通信科技(深圳)有限公司 Terminal, focusing method and device based on double cameras
CN107483822A (en) * 2017-08-28 2017-12-15 上海创功通讯技术有限公司 Focusing method, device and the electronic equipment of dual camera
CN112887615A (en) * 2021-01-27 2021-06-01 维沃移动通信有限公司 Shooting method and device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104410783B (en) * 2014-11-07 2018-02-02 广东欧珀移动通信有限公司 A kind of focusing method and terminal
CN106331484B (en) * 2016-08-24 2020-02-14 维沃移动通信有限公司 Focusing method and mobile terminal
CN110519503B (en) * 2018-05-22 2021-06-22 维沃移动通信有限公司 Method for acquiring scanned image and mobile terminal
CN118138876A (en) * 2022-01-25 2024-06-04 荣耀终端有限公司 Method for switching cameras and electronic equipment
CN116663587A (en) * 2022-02-17 2023-08-29 荣耀终端有限公司 Two-dimensional code identification method and identification device
CN118590754A (en) * 2022-05-30 2024-09-03 荣耀终端有限公司 Camera switching method and electronic equipment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105530421A (en) * 2014-09-30 2016-04-27 宇龙计算机通信科技(深圳)有限公司 Terminal, focusing method and device based on double cameras
CN107483822A (en) * 2017-08-28 2017-12-15 上海创功通讯技术有限公司 Focusing method, device and the electronic equipment of dual camera
CN112887615A (en) * 2021-01-27 2021-06-01 维沃移动通信有限公司 Shooting method and device

Also Published As

Publication number Publication date
CN117425071A (en) 2024-01-19

Similar Documents

Publication Publication Date Title
CN114205522B (en) Method for long-focus shooting and electronic equipment
KR102381713B1 (en) Photographic method, photographic apparatus, and mobile terminal
WO2021136050A1 (en) Image photographing method and related apparatus
CN108399349B (en) Image recognition method and device
CN108322646B (en) Image processing method, image processing device, storage medium and electronic equipment
US9489564B2 (en) Method and apparatus for prioritizing image quality of a particular subject within an image
US20230217097A1 (en) Image Content Removal Method and Related Apparatus
EP3544286B1 (en) Focusing method, device and storage medium
CN109040523B (en) Artifact eliminating method and device, storage medium and terminal
CN113919382B (en) Code scanning method and device
CN115209057B (en) Shooting focusing method and related electronic equipment
CN108513069B (en) Image processing method, image processing device, storage medium and electronic equipment
US11252341B2 (en) Method and device for shooting image, and storage medium
CN113824873A (en) Image processing method and related electronic equipment
WO2021185374A1 (en) Image capturing method and electronic device
EP4366289A1 (en) Photographing method and related apparatus
CN115359105B (en) Depth-of-field extended image generation method, device and storage medium
CN115004685A (en) Electronic device and method for displaying image at electronic device
CN117692771B (en) Focusing method and related device
CN116916151A (en) Shooting method, electronic device and storage medium
CN116668836B (en) Photographing processing method and electronic equipment
CN117425071B (en) Image acquisition method, electronic equipment and storage medium
CN110933314A (en) Focus-following shooting method and related product
CN114979458A (en) Image shooting method and electronic equipment
CN112422813A (en) Image blurring method, terminal device and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant