CN117121499A - Image processing method and electronic device - Google Patents

Image processing method and electronic device

Info

Publication number
CN117121499A
Authority
CN
China
Prior art keywords
camera
image
distance
focusing
objects
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202180095508.2A
Other languages
Chinese (zh)
Inventor
菅原廉万
罗俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Publication of CN117121499A

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 - Image signal generators
    • H04N 13/204 - Image signal generators using stereoscopic image cameras
    • H04N 13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 - Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 - Processing image signals
    • H04N 13/128 - Adjusting depth or disparity
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 - Image signal generators
    • H04N 13/296 - Synchronisation thereof; Control thereof
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules
    • H04N 23/62 - Control of parameters via user interfaces
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules
    • H04N 23/63 - Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631 - Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters

Abstract

An image processing method includes: obtaining depth information indicating a distance between a camera and a plurality of objects, the camera focusing on a subject among the plurality of objects and imaging the plurality of objects to obtain a camera image; changing a focusing distance, which is the distance between the camera and the in-focus position of the camera, according to a first user operation; and generating bokeh on the camera image based on the depth information and the focusing distance.

Description

Image processing method and electronic device
Technical Field
The present disclosure relates to an image processing method and an electronic apparatus.
Background
In recent years, a technique for generating bokeh (blur) on the foreground and background of a subject among images of a plurality of objects has been applied to camera images obtained by imaging the subject with a camera that has a deep depth of field, such as a smartphone camera.
When a subject is imaged using a camera having a deep depth of field, such as a smartphone camera, an image that appears to be in focus from near positions to far positions is obtained. Accordingly, in order to obtain an image in which only the subject is sharp and the foreground and background of the subject are blurred, bokeh is generated by image processing.
In this technique, the bokeh is generated based on depth information including the distance between the camera and the subject.
However, in the conventional art, the focusing distance (i.e., the distance between the camera and the in-focus position of the camera) is fixed when the subject is imaged.
Therefore, it is difficult to generate bokeh in a region that suits the user's preference with a simple operation.
Disclosure of Invention
The present disclosure is directed to solving at least one of the above-mentioned technical problems. Accordingly, there is a need to provide an image processing method and an electronic device.
According to the present disclosure, an image processing method includes:
obtaining depth information indicative of a distance between a camera and a plurality of objects, the camera focusing on a subject of the plurality of objects and imaging the plurality of objects to obtain a camera image;
changing a focusing distance according to a first user operation, the focusing distance being a distance between the camera and a focusing position of the camera; and
generating bokeh on the camera image based on the depth information and the focusing distance.
In one example, the image processing method may further include: changing an intensity of the bokeh according to a second user operation.
In one example, the first user operation may be an operation of a first slider displayed by a display device that displays a camera image.
In one example, the second user operation may be an operation of a second slider displayed by a display device that displays a camera image.
In one example, the camera image may be a still image.
In one example, the camera image may be a moving image.
In one example, generating the bokeh may be performed while imaging the plurality of objects.
In one example, generating the bokeh may be performed after imaging the plurality of objects and may be performed based on camera images stored in a memory.
In one example, the first user operation may be a single press operation of a button displayed by a display device that displays the camera image.
In one example, the camera image may be a moving image, and
in response to a pressing operation during imaging of a moving image, changing the focusing distance may be performed within a predetermined range of the focusing distance.
In one example, the predetermined range may be a range from a minimum value to a maximum value of the focusing distance.
In one example, the image processing method may further include: selecting a plurality of targets from a camera image, which is displayed by a display device, according to a third user operation, wherein,
changing the focusing distance may be performed so as to focus on the selected plurality of targets one by one.
In one example, the third user operation may be tapping the plurality of targets in the camera image.
In one example, the image processing method may further include: tracking the selected plurality of targets, wherein
changing the focusing distance may be performed based on the tracking results of the selected plurality of targets.
According to the present disclosure, an electronic device includes:
a camera focusing on a subject of the plurality of objects and imaging the plurality of objects to obtain a camera image; and
a processor, wherein the processor is configured to:
acquire depth information indicating a distance between the camera and the plurality of objects,
change a focusing distance, which is a distance between the camera and a focusing position of the camera, according to a first user operation, and
generate bokeh on the camera image based on the depth information and the focusing distance.
Drawings
The foregoing and other aspects and advantages of embodiments of the application will become more apparent and more readily appreciated from the following detailed description, taken in conjunction with the accompanying drawings, wherein:
fig. 1 is a diagram showing a configuration example of an electronic device capable of implementing an image processing method according to an embodiment of the present disclosure;
FIG. 2 is a flowchart illustrating an image processing method according to an embodiment of the present disclosure;
fig. 3 is a diagram showing setting of a focusing distance on a short distance side in the image processing method shown in fig. 2;
fig. 4 is a diagram showing setting of a focusing distance on a long distance side in the image processing method shown in fig. 2;
fig. 5 is a flowchart illustrating an image processing method according to an embodiment of the present disclosure;
fig. 6 is a flowchart illustrating an image processing method according to an embodiment of the present disclosure;
fig. 7 is a diagram showing a change of the focusing distance from the minimum value to the maximum value in the image processing method shown in fig. 6;
fig. 8 is a flowchart illustrating an image processing method according to an embodiment of the present disclosure;
fig. 9 is a diagram showing a plurality of objects in the image processing method shown in fig. 8;
fig. 10 is a diagram showing a change in focus distance according to a plurality of targets in the image processing method shown in fig. 8; and
fig. 11 is a diagram showing a tracking target in the image processing method shown in fig. 8.
Detailed Description
Reference will now be made in detail to embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings. Throughout the specification, identical or similar elements and elements having identical or similar functions are denoted by identical reference numerals. The embodiments described in the present disclosure with reference to the drawings are illustrative, intended to explain the disclosure, and should not be construed as limiting the disclosure.
(first embodiment)
Fig. 1 is a diagram showing a configuration example of an electronic device capable of implementing an image processing method according to a first embodiment of the present disclosure. In the example shown in fig. 1, the electronic apparatus 100 includes a stereoscopic camera module 10 as an example of a camera, a distance sensor module 20, and an image signal processor 30 as an example of a processor. The image signal processor 30 controls the stereoscopic camera module 10 and the distance sensor module 20, and processes camera image data acquired from the stereoscopic camera module 10.
In the example shown in fig. 1, the stereoscopic camera module 10 includes a main camera module 11 as an example of a camera and a slave camera module 12 for binocular stereoscopic viewing. The main camera module 11 includes a first lens 11a capable of focusing on a subject, a first image sensor 11b detecting an image input via the first lens 11a, and a first image sensor driver 11c driving the first image sensor 11b. The main camera module 11 focuses on a subject among a plurality of objects within the angle of view of the main camera module 11, and images the plurality of objects to acquire a main camera image as an example of a camera image.
In the example shown in fig. 1, the slave camera module 12 includes a second lens 12a capable of focusing on a subject, a second image sensor 12b detecting an image input via the second lens 12a, and a second image sensor driver 12c driving the second image sensor 12 b. The slave camera module 12 focuses on a subject of the plurality of objects and images the plurality of objects to acquire a slave camera image.
As shown in fig. 1, the distance sensor module 20 includes a lens 20a, a distance sensor 20b, a distance sensor driver 20c, and a projector 20d. The projector 20d emits pulsed light toward the plurality of objects including the subject, and the distance sensor 20b detects the reflected light from the plurality of objects through the lens 20a. The distance sensor module 20 acquires time-of-flight (ToF) depth information (ToF depth values) based on the time from transmitting the pulsed light to receiving the reflected light. The resolution of the ToF depth information detected by the distance sensor module 20 is lower than the resolution of the stereoscopic depth information of a stereoscopic image acquired based on the main camera image and the slave camera image.
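As an illustration of the time-of-flight principle described above (not taken from the patent text), the distance follows from the round-trip time of the emitted pulse; the function and constant names below are illustrative only:

    # Minimal sketch of the time-of-flight relation: the pulse travels to the object
    # and back, so the one-way distance is half of (speed of light x round-trip time).
    SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

    def tof_depth_m(round_trip_time_s: float) -> float:
        return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2.0

    # Example: a round-trip time of 20 ns corresponds to a depth of about 3 m.
    print(tof_depth_m(20e-9))  # ~2.998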
The image signal processor 30 controls the main camera module 11, the slave camera module 12, and the distance sensor module 20. The image signal processor 30 generates bokeh on the main camera image based on the main camera image, the slave camera image, and the ToF depth information. Specifically, the image signal processor 30 may correct stereoscopic depth information, which is obtained by performing stereo processing on the main camera image and the slave camera image, based on the ToF depth information. The stereoscopic depth information may indicate a value corresponding to the deviation in the horizontal direction (x-direction) between corresponding pixels of the main camera image and the slave camera image. The image signal processor 30 may generate the bokeh based on the corrected stereoscopic depth information. A Gaussian filter having a standard deviation σ corresponding to the corrected stereoscopic depth information may be used to generate the bokeh.
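The correction of the stereoscopic depth with the ToF depth can be pictured with a rough sketch such as the following; the global scale-and-offset fit shown here is an assumption made for illustration, not the correction method disclosed by the patent:

    import numpy as np

    def correct_stereo_depth(stereo_depth: np.ndarray,
                             tof_depth: np.ndarray,
                             tof_mask: np.ndarray) -> np.ndarray:
        """Fit the stereo depth map to the (lower-resolution) ToF samples where they
        exist (tof_mask), then apply the fitted scale and offset to the whole map."""
        s = stereo_depth[tof_mask].ravel()
        t = tof_depth[tof_mask].ravel()
        scale, offset = np.polyfit(s, t, deg=1)  # least-squares linear fit
        return scale * stereo_depth + offset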
Further, as shown in fig. 1, the electronic device 100 includes a Global Navigation Satellite System (GNSS) module 40, a wireless communication module 41, a codec 42, a speaker 43, a microphone 44, a display module 45, an input module 46, an inertial measurement unit (IMU) 47, a main processor 48, and a memory 49.
The GNSS module 40 measures the current position of the electronic device 100. The wireless communication module 41 performs wireless communication with the internet. The codec 42 bidirectionally performs encoding and decoding using a predetermined encoding/decoding method. The speaker 43 outputs sound based on the sound data decoded by the codec 42. The microphone 44 outputs sound data to the codec 42 based on the input sound. The display module 45 displays predefined information. The input module 46 receives input from a user. The IMU 47 detects angular velocity and acceleration of the electronic device 100.
The main processor 48 controls the GNSS module 40, the wireless communication module 41, the codec 42, the speaker 43, the microphone 44, the display module 45, the input module 46, and the IMU 47. The memory 49 stores the programs and data required for the image signal processor 30 to control the stereoscopic camera module 10 and the distance sensor module 20, the acquired image data, and the programs and data required for the main processor 48 to control the electronic device 100.
The memory 49 includes a computer-readable storage medium having stored thereon a computer program that is executed by the image signal processor 30 or the main processor 48 to implement the image processing method of the present disclosure.
For example, the image processing method includes: acquiring depth information indicating a distance between a camera and a plurality of objects, the camera focusing on a subject among the plurality of objects and imaging the plurality of objects to acquire a camera image. The method further includes: changing a focusing distance, which is the distance between the camera and the in-focus position of the camera, according to a first user operation. The method further includes: generating bokeh on the camera image based on the depth information and the focusing distance.
In the present embodiment, the electronic device 100 having the above-described configuration is a mobile phone such as a smartphone, but it may be another type of electronic device that includes the camera modules 11 and 12.
Next, an image processing method according to a first embodiment of the present disclosure will be described with reference to fig. 2 to 4. Fig. 2 is a flowchart illustrating an image processing method according to a first embodiment of the present disclosure. Fig. 3 is a diagram showing setting of a focusing distance on the short distance (near) side in the image processing method shown in fig. 2. Fig. 4 is a diagram showing that a focusing distance is set on a long distance (far) side in the image processing method shown in fig. 2.
After starting the recording of a moving image of the subject, the image signal processor 30 first acquires a main camera image and a slave camera image from the stereoscopic camera module 10 (step S1).
Next, the image signal processor 30 acquires depth information indicating the distances between the main camera module 11 and the plurality of objects within the angle of view of the main camera module 11, based on the input main camera image and slave camera image (step S2). As described above, the depth information can be obtained by correcting the stereoscopic depth information based on the ToF depth information. In acquiring the depth information, the subject and the objects other than the subject may be classified by matting (segmentation) using AI technology, and depth information may be acquired for each classified object. However, the specific form of the depth information is not particularly limited as long as it indicates the distances between the main camera module 11 and the plurality of objects located within the angle of view of the main camera module 11.
Next, the image signal processor 30 acquires the positions of the first slider SL1 and the second slider SL2 shown in fig. 3 (step S3). As shown in fig. 3, the first slider SL1 is a graphical user interface (GUI) element displayed by the display module 45. The image signal processor 30 changes the focusing distance, i.e., the distance between the main camera module 11 and the in-focus position of the main camera module 11, in response to a horizontal drag operation (first user operation) on the first slider SL1. By changing the focusing distance, the image signal processor 30 changes the subject to be focused on, and thereby changes the region in which bokeh is generated. The variable range of the focusing distance may correspond to the range of distances between the main camera module 11 and the plurality of objects indicated by the depth information. As shown in fig. 3, the second slider SL2 is also a graphical user interface (GUI) element displayed by the display module 45. The image signal processor 30 changes the intensity of the bokeh in response to a vertical drag operation (second user operation) on the second slider SL2. The image signal processor 30 may change the intensity of the bokeh, for example, by changing the standard deviation σ of the Gaussian filter according to the displacement of the second slider SL2.
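The mapping from the slider positions to the focusing distance and to the Gaussian standard deviation σ is not specified in detail in the text; a simple linear mapping, assumed here purely for illustration, could look like this:

    def slider_to_focus_distance(slider1: float, d_min: float, d_max: float) -> float:
        """First slider SL1 in [0, 1]: interpolate between the nearest and farthest
        object distances indicated by the depth information."""
        return d_min + slider1 * (d_max - d_min)

    def slider_to_sigma(slider2: float, sigma_max: float = 25.0) -> float:
        """Second slider SL2 in [0, 1]: scale the bokeh intensity; sigma_max is an
        assumed upper bound on the Gaussian standard deviation."""
        return slider2 * sigma_max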
Next, the image signal processor 30 updates the focusing distance and the bokeh intensity based on the acquired positions of the first slider SL1 and the second slider SL2 (step S4).
Next, the image signal processor 30 generates bokeh on the main camera image based on the focusing distance and the bokeh intensity (step S5). Specifically, the image signal processor 30 generates bokeh on the images of objects other than the subject, i.e., other than the object located at the focusing distance; the intensity of the bokeh corresponds to the position of the second slider SL2. On the other hand, the image signal processor 30 does not generate bokeh on the image of the subject. For example, in fig. 3, since the focusing distance is set on the short-distance side, the person 101 is set as the in-focus subject. As a result, in fig. 3, bokeh is generated on the image of the background 102 behind the person 101. On the other hand, in fig. 4, since the focusing distance is set on the long-distance side, the background 102 is set as the in-focus subject. As a result, in fig. 4, bokeh is generated on the image of the person 101.
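Step S5 can be sketched as follows, under the assumption that the bokeh is produced by a single Gaussian blur blended back over the out-of-focus pixels; cv2.GaussianBlur is standard OpenCV, while the depth tolerance around the focusing distance is an assumed parameter, not one disclosed by the patent:

    import cv2
    import numpy as np

    def apply_bokeh(frame: np.ndarray, depth: np.ndarray,
                    focus_distance: float, sigma: float,
                    tolerance: float = 0.15) -> np.ndarray:
        """Blur the whole frame with a Gaussian of standard deviation sigma (> 0),
        then restore sharpness on pixels whose depth lies near the focusing distance."""
        blurred = cv2.GaussianBlur(frame, (0, 0), sigma)  # kernel size derived from sigma
        in_focus = np.abs(depth - focus_distance) < tolerance * focus_distance
        out = blurred.copy()
        out[in_focus] = frame[in_focus]  # the subject at the focusing distance stays sharp
        return out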
Next, the image signal processor 30 determines whether or not an instruction to stop recording is issued (step S6).
When an instruction to stop recording is issued (step S6: yes), the image signal processor 30 saves the recorded moving image data in the memory 49 to end the processing (step S7).
On the other hand, when no instruction to stop recording is issued (step S6: No), the image signal processor 30 repeats the acquisition of the main camera image and the slave camera image (step S1).
Although fig. 2 to 4 describe an example of imaging a moving image of a subject, the first embodiment may also be applied to imaging a still image of a subject.
According to the first embodiment, by changing the focusing distance through the operation of the first slider SL1 (the first user operation), bokeh can be generated in a region that suits the user's preference with a simple operation. Further, according to the first embodiment, by operating the second slider SL2 (the second user operation), bokeh can be generated with an intensity that suits the user's preference with a simple operation.
(second embodiment)
Next, with reference to fig. 5, an image processing method according to a second embodiment of the present disclosure will be described, focusing on differences from the first embodiment.
Fig. 5 is a flowchart illustrating an image processing method according to a second embodiment of the present disclosure. In the first embodiment, an example has been described in which bokeh is generated on the main camera image while the subject is being imaged. In contrast, in the second embodiment, bokeh is generated on a main camera image stored in the memory 49 after the subject has been imaged.
In the flowchart of fig. 5, it is assumed that the main camera image is stored in the memory 49 in association with the depth information corresponding to the main camera image.
On this premise, the image signal processor 30 first reads the main camera image selected by the user and the depth information corresponding to that image from the memory 49 (step S11). The main camera image may be a moving image or a still image.
As shown in fig. 2, after step S11, steps S3-S5 are performed. Since the depth information is stored in the memory 49, the acquisition of the depth information described in step S2 of fig. 2 is not performed.
After step S5, the image signal processor 30 determines whether a bokeh image confirmation operation (e.g., a press of the shutter button) is performed (step S61).
When the bokeh image confirmation operation is performed (step S61: Yes), the image signal processor 30 saves the confirmed bokeh image in the memory 49 (step S71).
On the other hand, when the bokeh image confirmation operation is not performed (step S61: No), the image signal processor 30 reads the main camera image and depth information newly selected by the user from the memory 49 (step S11).
According to the second embodiment, even after the subject has been imaged, bokeh can be generated in a region that suits the user's preference with a simple operation.
(third embodiment)
Next, with reference to fig. 6 and 7, an image processing method according to a third embodiment of the present disclosure will be described, focusing on differences from the first embodiment.
Fig. 6 is a flowchart illustrating an image processing method according to a third embodiment of the present disclosure. Fig. 7 is a diagram showing a change of the focusing distance from the minimum value to the maximum value in the image processing method shown in fig. 6.
In the first embodiment, an example of changing the focusing distance according to the operation of the first slider SL1 has been described. In contrast, in the third embodiment, the focusing distance is changed from the minimum value to the maximum value according to a single pressing operation (first user operation) of the shutter button.
Specifically, as shown in fig. 6, after the depth information is acquired (step S2), the image signal processor 30 determines whether the shutter button B is pressed (step S21). As shown in fig. 7, the shutter button B is a Graphical User Interface (GUI) displayed by the display module 45.
When the shutter button B is pressed (step S21: yes), the image signal processor 30 starts to automatically change the focusing distance from the minimum value to the maximum value (step S22). For example, automatically changing the focusing distance is performed gradually over a certain period of time.
On the other hand, when the shutter button B is not pressed (step S21: no), the image signal processor 30 repeatedly determines whether the shutter button B is pressed (step S21).
Next, while the focusing distance is being automatically changed, the image signal processor 30 generates bokeh on the images of objects other than the subject in the main camera image, where the subject is the object corresponding to the current focusing distance (step S5).
Next, the image signal processor 30 determines whether the focusing distance has reached the maximum value (step S23).
When the focusing distance has reached the maximum value (step S23: yes), the image signal processor 30 ends the automatic change of the focusing distance, and proceeds to step S6 described in fig. 2.
On the other hand, when the focusing distance has not reached the maximum value (step S23: NO), the image signal processor 30 continues to automatically change the focusing distance (step S24).
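The automatic sweep of the focusing distance triggered by the single shutter press can be pictured with a sketch like the one below, which reuses the illustrative apply_bokeh helper sketched earlier and assumes the sweep is spread evenly over the recorded frames:

    def sweep_focus(frames, depths, d_min, d_max, sigma):
        """Step the focusing distance from its minimum to its maximum, one frame at
        a time, regenerating bokeh for each frame of the moving image."""
        n = len(frames)
        for i, (frame, depth) in enumerate(zip(frames, depths)):
            focus_distance = d_min + (d_max - d_min) * i / max(n - 1, 1)
            yield apply_bokeh(frame, depth, focus_distance, sigma)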
According to the third embodiment, as shown in fig. 7, a moving image can be captured while the region in which bokeh is generated changes gradually, by pressing the shutter button B only once.
(fourth embodiment)
Next, with reference to fig. 8 to 11, an image processing method according to a fourth embodiment of the present disclosure will be described focusing on differences from the first embodiment.
Fig. 8 is a flowchart illustrating an image processing method according to a fourth embodiment of the present disclosure. Fig. 9 is a diagram showing a plurality of targets in the image processing method shown in fig. 8. Fig. 10 is a diagram showing a change in the focusing distance according to the plurality of targets in the image processing method shown in fig. 8. Fig. 11 is a diagram showing the tracking of a target in the image processing method shown in fig. 8.
In the fourth embodiment, the focusing distance is automatically changed to focus one by one on a plurality of targets selected by the user from the main camera image.
Specifically, as shown in fig. 8, the image signal processor 30 selects a plurality of targets on which a tap operation (third user operation) is performed by the user in the main camera image displayed by the display module 45 (step S31). In the example shown in fig. 9, a first target T1 and a second target T2 are selected.
Next, the image signal processor 30 acquires depth information in the same manner as in fig. 2 (step S2).
Next, the image signal processor 30 tracks each of the selected plurality of targets using multi-object tracking (step S32). While tracking a target, the image signal processor 30 updates distance information indicating the distance between the main camera module 11 and that target. The distance information may be acquired based on the aforementioned stereoscopic camera images or the ToF depth information. In the example shown in fig. 11, the first target T1, which moves as time elapses from t = 0 to t = N, is tracked.
Next, the image signal processor 30 automatically changes the focusing distance to focus on the selected targets T1, T2 one by one (step S33). For example, automatically changing the focusing distance is performed gradually over a certain period of time.
Next, the image signal processor 30 generates bokeh on the images of the targets other than the currently focused target (step S5). In the example shown in fig. 10, the first target T1 and the second target T2 are brought into focus one by one as time elapses. That is, in the example shown in fig. 10, the image of the second target T2 and the image of the first target T1 are blurred one by one as time elapses.
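The one-by-one focusing on tracked targets can be sketched as follows, again reusing the illustrative apply_bokeh helper; the dwell time per target and the target_tracks structure (per-frame tracked distance of each selected target) are assumptions made for illustration:

    def focus_on_targets(frames, depths, target_tracks, dwell_frames, sigma):
        """Dwell on each selected target in turn, taking the focusing distance from
        its tracking result, and regenerate bokeh for every frame."""
        for i, (frame, depth) in enumerate(zip(frames, depths)):
            target_index = (i // dwell_frames) % len(target_tracks)
            focus_distance = target_tracks[target_index][i]  # tracked distance of that target
            yield apply_bokeh(frame, depth, focus_distance, sigma)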
According to the fourth embodiment, during the imaging of a moving image, the images on which bokeh is generated can be switched while the plurality of targets are brought into focus one by one.
In describing embodiments of the present disclosure, it should be understood that terms such as "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise" and "counterclockwise" should be construed to refer to the directions or locations described or illustrated in the discussed drawings. These relative terms are only used to simplify the description of the present disclosure and do not denote or imply that the referenced devices or elements must have a particular orientation, or must be constructed or operated in a particular orientation. Accordingly, these terms should not be construed as limiting the present disclosure.
Furthermore, terms such as "first" and "second" are used herein for descriptive purposes and are not intended to indicate or imply relative importance or significance or the number of technical features indicated. Thus, features defined as "first" and "second" may include one or more of the features. In the description of the present disclosure, "a plurality" means "two or more than two" unless otherwise indicated.
In describing embodiments of the present disclosure, unless otherwise indicated or limited, the terms "mounted," "connected," "coupled," and the like are used broadly and may be, for example, fixedly connected, detachably connected, or integrally connected, or mechanically or electrically connected; or may be a direct connection or an indirect connection via an intermediate structure, or may be internal communication between two elements as will be understood by those skilled in the art in view of the specific circumstances.
In embodiments of the present disclosure, unless otherwise indicated or limited, structures in which a first feature is "on" or "under" a second feature may include embodiments in which the first feature is in direct contact with the second feature, and may also include embodiments in which the first feature and the second feature are not in direct contact with each other but are contacted by additional features formed therebetween. Furthermore, a first feature "on", "over" or "on top of" a second feature may include the following embodiments: the first feature being "on", "above" or "top" of the second feature, orthogonally or obliquely, or simply meaning that the height of the first feature is higher than the height of the second feature; while a first feature "under", "below" or "bottom" a second feature may include the following embodiments: the first feature is "under", "below" or "bottom" the second feature, either orthogonally or obliquely, or simply means that the height of the first feature is lower than the height of the second feature.
Various embodiments and examples are provided in the above description to implement different structures of the present disclosure. To simplify the present disclosure, certain elements and arrangements are described above. However, these elements and arrangements are merely examples and are not intended to limit the present disclosure. Further, in various examples of the present disclosure, reference numerals and/or letters may be repeated. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations. In addition, examples of different processes and materials are provided in this disclosure. However, those skilled in the art will appreciate that other processes and/or materials may also be applied.
Reference throughout this specification to "one embodiment," "some embodiments," "one example embodiment," "one example," "a particular example," or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. Thus, the appearances of the above-identified phrases in various places throughout this specification are not necessarily all referring to the same embodiment or example of the disclosure. Furthermore, the particular features, structures, materials, or characteristics may be combined in any suitable manner in one or more embodiments or examples.
Any process or method described in the flow diagrams or otherwise described herein can be understood to include one or more modules, code segments, or portions of code that are executable instructions for implementing specific logical functions or steps in the process, and the scope of the preferred embodiments of the present disclosure includes other implementations, as will be understood by those skilled in the art that such functions may be implemented in a different order than shown or discussed, including in substantially the same order or in a reverse order.
The logic and/or steps described elsewhere herein or shown in a flowchart, for example, a particular sequence of executable instructions for implementing the logic function, may be embodied in or used in connection with any computer readable medium (e.g., a computer-based system, a system including a processor, or other systems capable of obtaining instructions from an instruction execution system, apparatus, and device executing the instructions) to be used by the instruction execution system, apparatus, or device. For the purposes of this description, a "computer-readable medium" can be any means that can be used in connection with or that can be adapted to contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples of the computer-readable medium include, but are not limited to: an electronic connection (electronic device) having one or more wires, a portable computer accessory (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read-only memory (CDROM). Furthermore, the computer readable medium may even be paper or other suitable medium on which the program can be printed, because: for example, paper or other suitable medium may be optically scanned, then edited, decrypted, or otherwise processed as necessary to electronically obtain a program, which is then stored in a computer memory.
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, in another embodiment as well, these steps or methods may be implemented by one or a combination of the following techniques, which are known in the art: discrete logic circuits having logic gates for implementing logic functions for data signals, application specific integrated circuits having appropriately combined logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
It will be appreciated by those skilled in the art that all or part of the steps of the above-described example methods of the present disclosure may be implemented by program instruction-related hardware, and the program may be stored in a computer-readable storage medium, which when run on a computer comprises one or more steps of the method embodiments of the present disclosure.
In addition, the various functional units of the disclosed embodiments may be integrated in one processing module, or the units may be physically separate, or two or more units may be integrated in one processing module. The integrated module may be implemented in hardware or in software functional modules. When the integrated module is implemented in the form of a software functional module and sold or used as a stand-alone product, the integrated module may be stored in a computer-readable storage medium.
The storage medium may be a read-only memory, a magnetic disk, a CD, or the like.
Although embodiments of the present disclosure have been shown and described, it will be understood by those skilled in the art that these embodiments are illustrative and should not be construed as limiting the present disclosure, and that changes, modifications, substitutions and alterations can be made to the embodiments without departing from the scope of the disclosure.

Claims (15)

1. An image processing method, the method comprising:
obtaining depth information indicative of a distance between a camera and a plurality of objects, the camera focusing on a subject of the plurality of objects and imaging the plurality of objects to obtain a camera image;
changing a focusing distance according to a first user operation, the focusing distance being a distance between the camera and a focusing position of the camera; and
generating bokeh on the camera image based on the depth information and the focusing distance.
2. The method according to claim 1, wherein the method further comprises: changing an intensity of the bokeh according to a second user operation.
3. The method according to claim 1 or 2, wherein the first user operation is an operation of a first slider displayed by a display device, the display device displaying the camera image.
4. The method of claim 2, wherein the second user operation is an operation of a second slider displayed by a display device that displays the camera image.
5. The method of claim 1, wherein the camera image is a still image.
6. The method of claim 1, wherein the camera image is a moving image.
7. The method of claim 1, wherein the generating of the bokeh is performed while imaging the plurality of objects.
8. The method of claim 1, wherein the generating of the bokeh is performed after imaging the plurality of objects and is performed based on camera images stored in a memory.
9. The method of claim 1 or 2, wherein the first user operation is a single press operation of a button displayed by a display device that displays the camera image.
10. The method according to claim 9, wherein:
the camera image is a moving image, and
the changing of the focusing distance is performed within a predetermined range of the focusing distance in response to the pressing operation during imaging of the moving image.
11. The method of claim 10, wherein the predetermined range is a range from a minimum value to a maximum value of the focusing distance.
12. The method according to claim 1, wherein the method further comprises: selecting a plurality of targets from the camera image according to a third user operation, the camera image being displayed by a display device, wherein,
the changing of the focusing distance is performed so as to focus on the selected plurality of targets one by one.
13. The method of claim 12, wherein the third user operation is tapping the plurality of targets in the camera image.
14. The method according to claim 12 or 13, characterized in that the method further comprises: tracking the selected plurality of targets, wherein,
the changing of the focusing distance is performed based on a tracking result of the selected plurality of targets.
15. An electronic device, comprising:
a camera focusing on a subject of a plurality of objects and imaging the plurality of objects to acquire a camera image; and
a processor, wherein the processor is configured to:
acquire depth information indicating a distance between the camera and the plurality of objects,
change a focusing distance according to a first user operation, the focusing distance being a distance between the camera and a focusing position of the camera, and
generate bokeh on the camera image based on the depth information and the focusing distance.
CN202180095508.2A 2021-03-08 2021-03-08 Image processing method and electronic device Pending CN117121499A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/079594 WO2022188007A1 (en) 2021-03-08 2021-03-08 Image processing method and electronic device

Publications (1)

Publication Number Publication Date
CN117121499A 2023-11-24

Family

ID=83227311

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180095508.2A Pending CN117121499A (en) 2021-03-08 2021-03-08 Image processing method and electronic device

Country Status (2)

Country Link
CN (1) CN117121499A (en)
WO (1) WO2022188007A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103188434B (en) * 2011-12-31 2017-05-24 联想(北京)有限公司 Method and device of image collection
KR20140007529A (en) * 2012-07-09 2014-01-20 삼성전자주식회사 Apparatus and method for taking a picture in camera device and wireless terminal having a camera device
CN105052124A (en) * 2013-02-21 2015-11-11 日本电气株式会社 Image processing device, image processing method and permanent computer-readable medium
US9363499B2 (en) * 2013-11-15 2016-06-07 Htc Corporation Method, electronic device and medium for adjusting depth values
US9779484B2 (en) * 2014-08-04 2017-10-03 Adobe Systems Incorporated Dynamic motion path blur techniques
EP3462410A1 (en) * 2017-09-29 2019-04-03 Thomson Licensing A user interface for manipulating light-field images

Also Published As

Publication number Publication date
WO2022188007A1 (en) 2022-09-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination