WO2023020190A1 - Full depth-of-field image synthesis method, storage medium and smartphone - Google Patents

Full depth-of-field image synthesis method, storage medium and smartphone

Info

Publication number
WO2023020190A1
WO2023020190A1 PCT/CN2022/106869 CN2022106869W WO2023020190A1 WO 2023020190 A1 WO2023020190 A1 WO 2023020190A1 CN 2022106869 W CN2022106869 W CN 2022106869W WO 2023020190 A1 WO2023020190 A1 WO 2023020190A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
portrait
neural network
synthesis method
network
Prior art date
Application number
PCT/CN2022/106869
Other languages
English (en)
French (fr)
Inventor
朱晓璞
Original Assignee
惠州Tcl云创科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 惠州Tcl云创科技有限公司
Publication of WO2023020190A1 publication Critical patent/WO2023020190A1/zh

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 Constructional details
    • H04N23/54 Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Definitions

  • The present application relates to the technical field of image synthesis, and in particular to a full depth-of-field image synthesis method, a storage medium, and a smartphone.
  • At present, the front selfie cameras of smartphones are equipped with an autofocus function, which can effectively improve the sharpness of faces when taking photographs.
  • Because the depth of field of a front camera is limited, when the focus point is on the face, the background falls outside the effective depth of field, resulting in a blurred background.
  • The technical problem to be solved by the present application is to provide, in view of the deficiencies of the prior art, a full depth-of-field image synthesis method, a storage medium, and a smartphone, aiming to solve the problem that when the focus point of an existing front camera is on the face, the background falls outside the effective depth of field and therefore appears blurred.
  • A full depth-of-field image synthesis method, comprising the steps of:
  • when taking a photograph, focus is locked on the face and on the farthest point of the lens respectively, and a first image and a second image are acquired correspondingly;
  • In the full depth-of-field image synthesis method, the first image is a face-focused image, and the step of acquiring the face-focused image includes:
  • if face data is detected in the camera preview, the face focus mode is started to take a photograph and obtain a face-focused image.
  • In the full depth-of-field image synthesis method, the second image is a far-focused image, and the step of acquiring the far-focused image includes:
  • after the face focus mode has been used to obtain the face-focused image, the far focus mode is started to take a photograph and obtain a far-focused image.
  • The full depth-of-field image synthesis method further includes the step of:
  • if no face data is detected in the camera preview, a close-up focus mode is started to obtain a close-up image.
  • In the full depth-of-field image synthesis method, the step of performing portrait segmentation on the first image to obtain a portrait-region image includes:
  • training an image segmentation neural network using labeled image data containing faces as training samples, to obtain a trained image segmentation neural network;
  • In the full depth-of-field image synthesis method, the step of training the image segmentation neural network using labeled image data containing faces as training images, to obtain the trained image segmentation neural network, includes:
  • training the image segmentation neural network to obtain the trained image segmentation neural network.
  • In the full depth-of-field image synthesis method, the first image is input into the trained image segmentation neural network for image segmentation, and the step of obtaining the portrait-region image includes:
  • In the full depth-of-field image synthesis method, the step of aligning and fusing the portrait-region image with the second image to obtain a fused image includes:
  • A storage medium, wherein the storage medium stores one or more programs which are executable by one or more processors to implement the steps of the full depth-of-field image synthesis method of the present application.
  • A smartphone, which includes a processor adapted to implement instructions, and a storage medium adapted to store a plurality of instructions, the instructions being adapted to be loaded by the processor to execute the steps of the full depth-of-field image synthesis method of the present application.
  • The present application proposes a full depth-of-field image synthesis method.
  • When taking a photograph, focus is locked on the face and on the farthest point of the lens respectively, and the first image (a face-focused image) and the second image (a far-focused image) are obtained correspondingly; portrait segmentation is performed on the first image to obtain a portrait-region image; and the portrait-region image is aligned and fused with the second image to obtain a fused image.
  • The present application takes two focused shots, one focused on the portrait and one focused on the distance, applies AI portrait segmentation to the face-focused image, and fuses the result onto the far-focused image, so that both the portrait and the background are sharp across the full depth of field.
  • FIG. 1 is a flowchart of the full depth-of-field image synthesis method.
  • FIG. 2 is a functional block diagram of a smartphone of the present application.
  • FIG. 1 is a flowchart of a preferred embodiment of the full depth-of-field image synthesis method provided by the present application. As shown in the figure, it includes the steps below.
  • The first image (the face-focused image) and the second image (the far-focused image) are obtained by taking two focused shots, one with face focus and one with far focus; AI portrait segmentation is then applied to the face-focused image and the result is fused onto the far-focused image to obtain a fused image, so that both the portrait and the background are sharp across the full depth of field.
  • The first image is a face-focused image, and the step of acquiring the face-focused image includes: starting the camera and detecting whether face data is present in the camera preview; if face data is detected in the camera preview, starting the face focus mode to take a photograph and obtain a face-focused image; if no face data is detected in the camera preview, starting the close-up focus mode to obtain a close-up image.
  • The second image is a far-focused image, and the step of acquiring the far-focused image includes: after the face focus mode has been used to obtain the face-focused image, starting the far focus mode to take a photograph and obtain a far-focused image.
  • The far focus mode generally means driving the focus to the farthest point of the lens and taking the photograph after the focus has converged.
  • Step S20, performing portrait segmentation on the first image to obtain a portrait-region image, specifically includes:
  • This embodiment proposes an end-to-end trainable neural network to perform the image segmentation, and this end-to-end trainable neural network is called the image segmentation neural network.
  • The image segmentation neural network of the embodiments of the present application is used to segment the portrait in the first image (the face-focused image), which mainly involves the following two steps: 1) dividing the first image into a plurality of sections and predicting, within each section, the ROI that covers all portraits in that section; here, an ROI is a region to be processed that is outlined in the image by a box, circle, ellipse, irregular polygon, or the like, for the subsequent processing in step 2); 2) extracting the features within each ROI and precisely locating the position of each portrait.
  • The image segmentation neural network of the embodiments of the present application can be applied to, but is not limited to, the following scenarios:
  • Scenario 1: the user transmits the captured first image containing a portrait to the cloud over a network, and the cloud segments the image using the image segmentation neural network of the embodiments of the present application.
  • Scenario 2: the user inputs the captured first image containing a portrait into a local computer device, and the computer device segments the image using the image segmentation neural network of the embodiments of the present application.
  • The step of training the image segmentation neural network using labeled image data containing faces as training images, to obtain the trained image segmentation neural network, includes:
  • On the one hand, the more training images there are, the better the training result of the image segmentation neural network; on the other hand, the more training images there are, the more computing resources are consumed.
  • In practice, several hundred or more images containing faces can be prepared as training images.
  • It is also necessary to obtain position annotation information for the portraits to be segmented in the training images. For example, the pixels corresponding to each portrait to be segmented can be marked in any way, such as marking different portraits with different colors, and the position annotation information can be produced by a person who can recognize the portraits, using a graphics editing tool.
  • The position annotation information is then converted into the required format, which includes but is not limited to heat maps, coordinate points, and the like.
  • The image segmentation neural network includes at least a first sub-network, a second sub-network, and a third sub-network. The first sub-network is used to obtain the feature map of the training image; the second sub-network processes the feature map of the training image to obtain a target region where at least one portrait in the training image is located; and the third sub-network obtains the position information of the portrait to be segmented in the target region.
  • The structure of the first sub-network is not limited. Taking the segmentation of the retinal neural layer in an OCT image with the VGG16 convolutional neural network as an example, the training image is processed through the conv1 to conv5 layers of the network to obtain a W×H×C feature map, where W×H is the spatial size of the feature map and C is the number of channels.
  • The second sub-network divides the sample image into a plurality of sections along a target direction, the target direction including at least the vertical direction or the horizontal direction; for each of the plurality of sections, the ROI corresponding to the at least one portrait is determined within that section, the ROI being defined by a first boundary and a second boundary whose directions are perpendicular to the target direction; and the target region where at least one portrait is located is determined based on the ROIs in the plurality of sections.
  • The plurality of sections may be equal-width sections arranged vertically or equal-width sections arranged horizontally; the second sub-network predicts, within each equal-width section of the image, the region covering all portraits, as the ROI.
  • Methods for predicting the ROI include, but are not limited to, regressing a heat map, regressing coordinates, and sliding-window prediction.
  • For each of the plurality of sections, the third sub-network performs feature extraction on the ROI in that section and generates a fixed-height feature vector from the feature extraction result; based on the fixed-height feature vector, the position information of the portrait to be segmented in that section is obtained.
  • The features of the ROI region are extracted from the feature map through an ROIAlign layer or an ROI Pooling layer and mapped to a fixed-height feature vector, from which the precise position of the portrait in each ROI region is predicted; the methods for predicting the precise position of the portrait include, but are not limited to, regressing a heat map, regressing coordinates, and sliding-window prediction.
  • The number of portraits to be segmented in the target region is one or more; if there are multiple portraits in an ROI, the precise position of each portrait is predicted separately.
  • The image segmentation neural network is trained based on the position information of the portrait to be segmented and the position annotation information of the portrait to be segmented, to obtain the trained image segmentation neural network.
  • The position information of the portrait predicted in the above steps is input to a loss layer, and the loss layer can adjust the parameter values of the image segmentation neural network according to the predicted position information, thereby training the parameter values of the image segmentation neural network.
  • Specifically, a first loss function value is obtained based on the position information of the portrait to be segmented and the position annotation information of the portrait to be segmented; whether the first loss function value satisfies a first preset condition is determined; and in response to the first loss function value not satisfying the first preset condition, the parameter values of the image segmentation neural network are adjusted based on the first loss function value, and the following operations are then performed iteratively until the first loss function value satisfies the first preset condition: using the second sub-network of the image segmentation neural network to obtain the target region where at least one portrait in the training image is located, and using the third sub-network of the image segmentation neural network to obtain the position information of the portrait to be segmented in the target region.
  • The first image is input into the trained image segmentation neural network for image segmentation, and the step of obtaining the portrait-region image includes: acquiring the first image; using the trained image segmentation neural network to obtain the target region where at least one portrait in the first image is located, obtaining the position information of the portrait to be segmented in the target region, and obtaining the portrait-region image based on that position information.
  • Obtaining the target region where at least one portrait in the first image is located includes: dividing the first image into a plurality of sections along a target direction, the target direction including at least the vertical direction or the horizontal direction; for each of the plurality of sections, determining within that section the ROI corresponding to the at least one portrait, the ROI being defined by a first boundary and a second boundary whose directions are perpendicular to the target direction; and determining the target region where at least one portrait is located based on the ROIs in the plurality of sections.
  • Obtaining the position information of the portrait to be segmented in the target region includes: for each of the plurality of sections, performing feature extraction on the ROI in that section and generating a fixed-height feature vector from the feature extraction result; and obtaining the position information of the portrait to be segmented in that section based on the fixed-height feature vector.
  • The step of aligning and fusing the portrait-region image with the second image to obtain the fused image includes: computing the offset of the portrait-region image relative to the second image using a pixel alignment algorithm, and replacing the corresponding pixels of the second image with the pixels of the portrait-region image to obtain the fused image.
  • The present application takes two focused shots, one focused on the portrait and one focused on the distance, applies AI portrait segmentation to the face-focused image, and fuses the result onto the far-focused image, so that both the portrait and the background are sharp across the full depth of field.
  • A storage medium stores one or more programs which are executable by one or more processors to implement the steps of the full depth-of-field image synthesis method of the present application.
  • A smartphone is also provided, as shown in FIG. 2, which includes at least one processor 20; a display screen 21; and a memory 22, and may further include a communications interface 23 and a bus 24.
  • The display screen 21 is configured to display a user guidance interface preset in an initial setup mode.
  • The communications interface 23 can transmit information.
  • The processor 20 can invoke logic instructions in the memory 22 to execute the methods of the above embodiments.
  • The logic instructions in the memory 22 may be implemented in the form of software functional units and, when sold or used as an independent product, may be stored in a computer-readable storage medium.
  • The memory 22 can be configured to store software programs and computer-executable programs, such as the program instructions or modules corresponding to the methods in the embodiments of the present disclosure.
  • The processor 20 runs the software programs, instructions, or modules stored in the memory 22 to execute functional applications and data processing, that is, to implement the methods of the above embodiments.
  • The memory 22 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application required by at least one function, and the data storage area may store data created according to the use of the terminal device, and the like.
  • The memory 22 may include high-speed random access memory and may also include non-volatile memory.
  • The smartphone includes a processor adapted to implement instructions, and a storage medium adapted to store a plurality of instructions, the instructions being adapted to be loaded by the processor to execute the steps of the full depth-of-field image synthesis method described in this application.
  • In summary, the present application proposes a full depth-of-field image synthesis method.
  • When taking a photograph, focus is locked on the face and on the farthest point of the lens respectively, and the first image (a face-focused image) and the second image (a far-focused image) are acquired correspondingly; portrait segmentation is performed on the first image to obtain a portrait-region image; and the portrait-region image is aligned and fused with the second image to obtain a fused image.
  • The present application takes two focused shots, one focused on the portrait and one focused on the distance, applies AI portrait segmentation to the face-focused image, and fuses the result onto the far-focused image, so that both the portrait and the background are sharp across the full depth of field.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present application discloses a full depth-of-field image synthesis method, a storage medium, and a smartphone. The full depth-of-field image synthesis method comprises the steps of: when taking a photograph, locking focus on the face and on the farthest point of the lens respectively, and correspondingly acquiring a first image (a face-focused image) and a second image (a far-focused image); performing portrait segmentation on the first image to obtain a portrait-region image; and aligning and fusing the portrait-region image with the second image to obtain a fused image. The present application can achieve a sharp portrait and a sharp background across the full depth of field.

Description

Full depth-of-field image synthesis method, storage medium and smartphone
This application claims priority to the Chinese patent application filed on August 19, 2021 with application number CN202110953450.2 and entitled "一种全景深图像合成方法、存储介质及智能手机" (Full depth-of-field image synthesis method, storage medium and smartphone), the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of image synthesis, and in particular to a full depth-of-field image synthesis method, a storage medium, and a smartphone.
Background
At present, the front selfie cameras of smartphones are equipped with an autofocus function, which can effectively improve the sharpness of faces when taking photographs. However, because the depth of field of a front camera is limited, when the focus point is on the face, the background falls outside the effective depth of field, causing the background to appear blurred.
Therefore, the prior art still needs improvement and development.
Technical Problem
The technical problem to be solved by the present application is to provide, in view of the deficiencies of the prior art, a full depth-of-field image synthesis method, a storage medium, and a smartphone, aiming to solve the problem that when the focus point of an existing front camera is on the face, the background falls outside the effective depth of field and therefore appears blurred.
Technical Solution
To solve the above technical problem, the technical solution adopted by the present application is as follows:
A full depth-of-field image synthesis method, comprising the steps of:
when taking a photograph, locking focus on the face and on the farthest point of the lens respectively, and correspondingly acquiring a first image and a second image;
performing portrait segmentation on the first image to obtain a portrait-region image;
aligning and fusing the portrait-region image with the second image to obtain a fused image.
In the full depth-of-field image synthesis method, the first image is a face-focused image, and the step of acquiring the face-focused image comprises:
starting the camera and detecting whether face data is present in the camera preview;
if face data is detected in the camera preview, starting the face focus mode to take a photograph and obtain a face-focused image.
In the full depth-of-field image synthesis method, the second image is a far-focused image, and the step of acquiring the far-focused image comprises:
after the face focus mode has been started to take a photograph and obtain the face-focused image, starting the far focus mode to take a photograph and obtain a far-focused image.
The full depth-of-field image synthesis method further comprises the step of:
if no face data is detected in the camera preview, starting the close-up focus mode to obtain a close-up image.
In the full depth-of-field image synthesis method, the step of performing portrait segmentation on the first image to obtain a portrait-region image comprises:
training an image segmentation neural network using labeled image data containing faces as training samples, to obtain a trained image segmentation neural network;
inputting the first image into the trained image segmentation neural network for image segmentation, to obtain the portrait-region image.
In the full depth-of-field image synthesis method, the step of training the image segmentation neural network using labeled image data containing faces as training images, to obtain the trained image segmentation neural network, comprises:
using the image segmentation network to obtain a target region where at least one portrait in a training image is located, and obtaining position information of the portrait to be segmented in the target region, wherein the training image is annotated with position annotation information of the portrait to be segmented;
training the image segmentation neural network based on the position information of the portrait to be segmented and the position annotation information of the portrait to be segmented, to obtain the trained image segmentation neural network.
In the full depth-of-field image synthesis method, the step of inputting the first image into the trained image segmentation neural network for image segmentation, to obtain the portrait-region image, comprises:
acquiring the first image;
using the trained image segmentation neural network to obtain a target region where at least one portrait in the first image is located, obtaining position information of the portrait to be segmented in the target region, and obtaining the portrait-region image based on the position information of the portrait.
In the full depth-of-field image synthesis method, the step of aligning and fusing the portrait-region image with the second image to obtain the fused image comprises:
using a pixel alignment algorithm to compute the offset of the portrait-region image relative to the second image, and replacing the corresponding pixels of the second image with the pixels of the portrait-region image, to obtain the fused image.
A storage medium, wherein the storage medium stores one or more programs which are executable by one or more processors to implement the steps of the full depth-of-field image synthesis method of the present application.
A smartphone, comprising a processor adapted to implement instructions, and a storage medium adapted to store a plurality of instructions, the instructions being adapted to be loaded by the processor to execute the steps of the full depth-of-field image synthesis method of the present application.
Beneficial Effects
The present application proposes a full depth-of-field image synthesis method. When taking a photograph, focus is locked on the face and on the farthest point of the lens respectively, and a first image (a face-focused image) and a second image (a far-focused image) are acquired correspondingly; portrait segmentation is performed on the first image to obtain a portrait-region image; and the portrait-region image is aligned and fused with the second image to obtain a fused image. The present application takes two focused shots, one focused on the portrait and one focused on the distance, applies AI portrait segmentation to the face-focused image, and fuses the result onto the far-focused image, so that both the portrait and the background are sharp across the full depth of field.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a flowchart of a full depth-of-field image synthesis method.
FIG. 2 is a functional block diagram of a smartphone of the present application.
Embodiments of the Present Application
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. Based on the embodiments of the present application, all other embodiments obtained by those skilled in the art without creative effort fall within the scope of protection of the present application.
The present application provides a full depth-of-field image synthesis method, a storage medium, and a smartphone. To make the objectives, technical solutions, and effects of the present application clearer and more definite, the present application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present application, not to limit it.
Those skilled in the art will understand that, unless expressly stated otherwise, the singular forms "a", "an", "said", and "the" used herein may also include the plural forms. It should be further understood that the word "comprising" used in the specification of the present application refers to the presence of the stated features, integers, steps, operations, elements, and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It should be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. In addition, "connected" or "coupled" as used herein may include wireless connection or wireless coupling. The term "and/or" as used herein includes any unit of, and all combinations of, one or more of the associated listed items.
Those skilled in the art will understand that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by one of ordinary skill in the art to which the present application belongs. It should also be understood that terms such as those defined in general dictionaries should be understood to have meanings consistent with their meanings in the context of the prior art and, unless specifically defined as herein, will not be interpreted in an idealized or overly formal sense.
Referring to FIG. 1, FIG. 1 is a flowchart of a preferred embodiment of the full depth-of-field image synthesis method provided by the present application. As shown in the figure, the method comprises the steps of:
S10: when taking a photograph, locking focus on the face and on the farthest point of the lens respectively, and correspondingly acquiring a first image and a second image;
S20: performing portrait segmentation on the first image to obtain a portrait-region image;
S30: aligning and fusing the portrait-region image with the second image to obtain a fused image.
In this embodiment, during photographing, two focused shots are taken, one focused on the face and one focused on the distance, to obtain the first image (the face-focused image) and the second image (the far-focused image) correspondingly; AI portrait segmentation is then performed on the face-focused image and the result is fused onto the far-focused image to obtain a fused image, so that both the portrait and the background are sharp across the full depth of field.
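By way of illustration only, the overall flow of steps S10 to S30 can be sketched in Python as below. This is a minimal sketch, not part of the original disclosure: the `camera` object and the `segment_portrait` and `align_and_fuse` callables are hypothetical stand-ins for the camera driver, the trained segmentation network, and the fusion step detailed later in this description.

```python
def synthesize_full_dof(camera, segment_portrait, align_and_fuse):
    # S10: two shots, one focus-locked on the face, one on the farthest
    # point of the lens.
    first_image = camera.capture(focus="face")       # face-focused image
    second_image = camera.capture(focus="farthest")  # far-focused image

    # S20: AI portrait segmentation on the face-focused image.
    portrait_image, portrait_mask = segment_portrait(first_image)

    # S30: align the portrait region to the far-focused frame and replace
    # the corresponding pixels to obtain the fused image.
    return align_and_fuse(portrait_image, portrait_mask, second_image)
```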
In some implementations, the first image is a face-focused image, and the step of acquiring the face-focused image comprises: starting the camera and detecting whether face data is present in the camera preview; if face data is detected in the camera preview, starting the face focus mode to take a photograph and obtain a face-focused image; and if no face data is detected in the camera preview, starting the close-up focus mode to obtain a close-up image.
In some implementations, the second image is a far-focused image, and the step of acquiring the far-focused image comprises: after the face focus mode has been started to take a photograph and obtain the face-focused image, starting the far focus mode to take a photograph and obtain a far-focused image. In this embodiment, the far focus mode generally means driving the focus to the farthest point of the lens and taking the photograph after the focus has converged.
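The branching between face focus, far focus, and close-up focus might be sketched as below. This is an illustrative sketch only: the Haar cascade face detector is one possible way to check for face data in the preview (the patent does not prescribe a detector), and the `camera` focus interface is a hypothetical stand-in.

```python
import cv2

# OpenCV's bundled Haar cascade, used here only as a stand-in face detector.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def capture_pair(camera):
    preview = camera.preview_frame()  # hypothetical preview accessor
    gray = cv2.cvtColor(preview, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    if len(faces) == 0:
        # No face data in the preview: fall back to close-up focus mode.
        return camera.capture(focus="close_up"), None

    first = camera.capture(focus="face")  # face focus mode
    # Far focus mode: drive focus to the farthest point of the lens and
    # shoot only after focus has converged.
    second = camera.capture(focus="farthest", wait_for_convergence=True)
    return first, second
```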
In some implementations, step S20, performing portrait segmentation on the first image to obtain a portrait-region image, specifically comprises:
S21: training an image segmentation neural network using labeled image data containing faces as training samples, to obtain a trained image segmentation neural network;
S22: inputting the first image into the trained image segmentation neural network for image segmentation, to obtain the portrait-region image.
This embodiment proposes an end-to-end trainable neural network to perform the image segmentation, referred to here as the image segmentation neural network. Using the image segmentation neural network of the embodiments of the present application to segment the portrait in the first image (the face-focused image) mainly involves the following two steps: 1) dividing the first image into a plurality of sections and predicting, within each section, the ROI that covers all portraits in that section; here, an ROI is a region to be processed that is outlined in the image by a box, circle, ellipse, irregular polygon, or the like, for the subsequent processing in step 2); 2) extracting the features within each ROI and precisely locating the position of each portrait. The image segmentation neural network of the embodiments of the present application can be applied to, but is not limited to, the following scenarios:
Scenario 1: the user transmits the captured first image containing a portrait to the cloud over a network, and the cloud segments the image using the image segmentation neural network of the embodiments of the present application.
Scenario 2: the user inputs the captured first image containing a portrait into a local computer device, and the computer device segments the image using the image segmentation neural network of the embodiments of the present application.
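Whether run in the cloud or on a local device, the two-step segmentation described above has the same shape. The driver below is a minimal sketch under that reading; `backbone`, `roi_net`, and `locator_net` stand for the three sub-networks detailed in the following paragraphs and are not names from the patent.

```python
def segment_portraits(image_tensor, backbone, roi_net, locator_net):
    """Two-step inference sketch: 1) split the image into sections and
    predict, per section, an ROI covering all portraits in it; 2) extract
    features inside each ROI and precisely locate each portrait."""
    feat = backbone(image_tensor)        # first sub-network: W x H x C feature map
    rois = roi_net(feat)                 # second sub-network: one ROI per section
    positions = locator_net(feat, rois)  # third sub-network: precise positions
    return rois, positions
```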
In some implementations, the step of training the image segmentation neural network using labeled image data containing faces as training images, to obtain the trained image segmentation neural network, comprises:
using the image segmentation network to obtain a target region where at least one portrait in a training image is located, and obtaining position information of the portrait to be segmented in the target region, wherein the training image is annotated with position annotation information of the portrait to be segmented; and training the image segmentation neural network based on the position information of the portrait to be segmented and the position annotation information of the portrait to be segmented, to obtain the trained image segmentation neural network.
Specifically, the more training images there are, the better the training result of the image segmentation neural network; on the other hand, the more training images there are, the more computing resources are consumed. In practice, several hundred or more images containing faces can be prepared as training images. At the same time, position annotation information for the portraits to be segmented in the training images must be obtained. For example, the pixels corresponding to each portrait to be segmented can be marked in any way, such as marking different portraits with different colors, and the position annotation information can be produced by a person who can recognize the portraits, using a graphics editing tool. Further, the position annotation information needs to be converted into the required format, so that the position of an annotated portrait in the training image can later be recovered from it; the required format includes, but is not limited to, heat maps, coordinate points, and the like.
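The conversion of painted annotations into coordinate form can be illustrated as follows. This sketch assumes, purely for illustration, that each portrait is painted with its own integer id in a label mask (0 for background), which is one concrete reading of "marking different portraits with different colors".

```python
import numpy as np

def mask_to_boxes(label_mask):
    """Convert an integer-id annotation mask into per-portrait coordinate
    boxes (x0, y0, x1, y1), one of the target formats mentioned above;
    a heat map could be produced analogously."""
    boxes = {}
    for pid in np.unique(label_mask):
        if pid == 0:          # background
            continue
        ys, xs = np.nonzero(label_mask == pid)
        boxes[int(pid)] = (int(xs.min()), int(ys.min()),
                           int(xs.max()), int(ys.max()))
    return boxes
```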
In this embodiment, the image segmentation neural network comprises at least a first sub-network, a second sub-network, and a third sub-network. The first sub-network is used to obtain a feature map of the training image; the second sub-network processes the feature map of the training image to obtain the target region where at least one portrait in the training image is located; and the third sub-network obtains the position information of the portrait to be segmented in the target region.
In this embodiment, the structure of the first sub-network is not limited. Taking the segmentation of the retinal neural layer in an OCT image with the VGG16 convolutional neural network as an example, the training image is processed through the conv1 to conv5 layers of the network to obtain a W×H×C feature map, where W×H is the spatial size of the feature map and C is the number of channels.
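For the VGG16 example, the first sub-network can be sketched with torchvision as below. Truncating `vgg16().features` at layer index 30 so that processing stops after conv5_3 is an assumption of this sketch; any backbone yielding a W×H×C feature map would serve equally well.

```python
import torch
import torchvision

vgg = torchvision.models.vgg16(weights=None)
# Keep conv1 through conv5_3 (with their ReLUs and poolings), drop the
# final pooling layer.
backbone = torch.nn.Sequential(*list(vgg.features.children())[:30])

x = torch.randn(1, 3, 224, 224)   # one training image
feat = backbone(x)
print(feat.shape)                 # torch.Size([1, 512, 14, 14]), i.e. C=512, W=H=14
```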
In this embodiment, the second sub-network divides the sample image into a plurality of sections along a target direction, the target direction including at least the vertical direction or the horizontal direction; for each of the plurality of sections, the ROI corresponding to the at least one portrait is determined within that section, the ROI being defined by a first boundary and a second boundary whose directions are perpendicular to the target direction; and the target region where at least one portrait is located is determined based on the ROIs in the plurality of sections. Here, the plurality of sections may be equal-width sections arranged vertically or equal-width sections arranged horizontally; the second sub-network predicts, within each equal-width section of the image, the region covering all portraits, as the ROI. Methods for predicting the ROI include, but are not limited to, regressing a heat map, regressing coordinates, and sliding-window prediction.
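A minimal coordinate-regression version of the second sub-network might look like the following; the pooling granularity and layer sizes are assumptions of this sketch, and heat-map or sliding-window prediction would be equally valid per the text above. Converting the normalized boundary pairs into pixel ROIs (using each equal-width section's known extent) is omitted for brevity.

```python
import torch
import torch.nn as nn

class SectionROIHead(nn.Module):
    """Split the feature map into equal-width vertical sections and regress,
    per section, the two ROI boundaries perpendicular to the split direction."""
    def __init__(self, channels=512, num_sections=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d((1, num_sections))  # one cell per section
        self.fc = nn.Linear(channels, 2)   # first and second boundary

    def forward(self, feat):                   # feat: (N, C, H, W)
        cells = self.pool(feat).squeeze(2)     # (N, C, num_sections)
        cells = cells.permute(0, 2, 1)         # (N, num_sections, C)
        return torch.sigmoid(self.fc(cells))   # normalized boundaries, (N, num_sections, 2)
```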
In this embodiment, for each of the plurality of sections, the third sub-network performs feature extraction on the ROI in that section and generates a fixed-height feature vector from the feature extraction result; based on the fixed-height feature vector, the position information of the portrait to be segmented in that section is obtained. Here, the features of the ROI region are extracted from the feature map through an ROIAlign layer or an ROI Pooling layer and mapped to a fixed-height feature vector, from which the precise position of the portrait in each ROI region is predicted; the methods for predicting the precise position include, but are not limited to, regressing a heat map, regressing coordinates, and sliding-window prediction. The number of portraits to be segmented in the target region is one or more; if there are multiple portraits in an ROI, the precise position of each portrait is predicted separately.
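The third sub-network can be sketched with torchvision's `roi_align`, which pools each ROI to a fixed-size feature in the manner of the ROIAlign layer above. The 1/16 feature stride matches the VGG16 example, and the head sizes plus the choice of coordinate regression over a heat map are assumptions of this sketch.

```python
import torch
import torch.nn as nn
from torchvision.ops import roi_align

class PortraitLocator(nn.Module):
    """Pool each ROI to a fixed-size feature via ROIAlign, then regress the
    precise portrait position inside it."""
    def __init__(self, channels=512, pooled=7):
        super().__init__()
        self.pooled = pooled
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(channels * pooled * pooled, 256),
            nn.ReLU(),
            nn.Linear(256, 4),        # x0, y0, x1, y1 of one portrait
        )

    def forward(self, feat, boxes):   # boxes: list of (K_i, 4) tensors per image
        pooled = roi_align(feat, boxes, output_size=(self.pooled, self.pooled),
                           spatial_scale=1.0 / 16)  # fixed-size ROI features
        return self.head(pooled)      # one predicted position per ROI
```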
In some implementations, the image segmentation neural network is trained based on the position information of the portrait to be segmented and the position annotation information of the portrait to be segmented, to obtain the trained image segmentation neural network.
Specifically, the position information of the portrait predicted in the above steps is input to a loss layer, and the loss layer can adjust the parameter values of the image segmentation neural network according to the predicted position information, thereby training the parameter values of the image segmentation neural network. Specifically, a first loss function value is obtained based on the position information of the portrait to be segmented and the position annotation information of the portrait to be segmented; whether the first loss function value satisfies a first preset condition is determined; and in response to the first loss function value not satisfying the first preset condition, the parameter values of the image segmentation neural network are adjusted based on the first loss function value, and the following operations are then performed iteratively until the first loss function value satisfies the first preset condition: using the second sub-network of the image segmentation neural network to obtain the target region where at least one portrait in the training image is located, and using the third sub-network of the image segmentation neural network to obtain the position information of the portrait to be segmented in the target region.
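The loss-driven iteration can be sketched as a standard training loop. The smooth-L1 loss, the Adam optimizer, and the plain loss threshold standing in for the "first preset condition" are all assumptions of this sketch, since the patent leaves those choices open.

```python
import torch
from itertools import cycle

def train(segmenter, loader, threshold=0.05, lr=1e-4, max_steps=10_000):
    optimizer = torch.optim.Adam(segmenter.parameters(), lr=lr)
    loss_fn = torch.nn.SmoothL1Loss()
    for step, (images, annotated_positions) in enumerate(cycle(loader)):
        predicted_positions = segmenter(images)      # sub-networks 2 and 3
        loss = loss_fn(predicted_positions, annotated_positions)  # first loss value
        if loss.item() < threshold or step >= max_steps:
            break                                    # first preset condition met
        optimizer.zero_grad()
        loss.backward()                              # adjust the parameter values
        optimizer.step()
    return segmenter
```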
In some implementations, the step of inputting the first image into the trained image segmentation neural network for image segmentation, to obtain the portrait-region image, comprises: acquiring the first image; using the trained image segmentation neural network to obtain the target region where at least one portrait in the first image is located, obtaining the position information of the portrait to be segmented in the target region, and obtaining the portrait-region image based on the position information of the portrait.
In this embodiment, obtaining the target region where at least one portrait in the first image is located comprises: dividing the first image into a plurality of sections along a target direction, the target direction including at least the vertical direction or the horizontal direction; for each of the plurality of sections, determining within that section the ROI corresponding to the at least one portrait, the ROI being defined by a first boundary and a second boundary whose directions are perpendicular to the target direction; and determining the target region where at least one portrait is located based on the ROIs in the plurality of sections.
In this embodiment, obtaining the position information of the portrait to be segmented in the target region comprises: for each of the plurality of sections, performing feature extraction on the ROI in that section and generating a fixed-height feature vector from the feature extraction result; and obtaining the position information of the portrait to be segmented in that section based on the fixed-height feature vector.
In some implementations, the step of aligning and fusing the portrait-region image with the second image to obtain the fused image comprises: using a pixel alignment algorithm to compute the offset of the portrait-region image relative to the second image, and replacing the corresponding pixels of the second image with the pixels of the portrait-region image, to obtain the fused image.
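One concrete pixel-alignment choice is phase correlation for a pure-translation offset; the sketch below uses it only as an illustrative stand-in, since the patent does not fix a particular alignment algorithm. The portrait mask is assumed to come from the segmentation step above.

```python
import cv2
import numpy as np

def align_and_fuse(first, portrait_mask, second):
    """Estimate the translation between the two shots, shift the portrait
    pixels of the face-focused frame accordingly, and replace the
    corresponding pixels of the far-focused frame."""
    g1 = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY).astype(np.float32)
    g2 = cv2.cvtColor(second, cv2.COLOR_BGR2GRAY).astype(np.float32)
    (dx, dy), _ = cv2.phaseCorrelate(g1, g2)   # offset of first relative to second

    h, w = portrait_mask.shape
    shift = np.float32([[1, 0, dx], [0, 1, dy]])
    warped = cv2.warpAffine(first, shift, (w, h))
    warped_mask = cv2.warpAffine(portrait_mask.astype(np.uint8), shift, (w, h)) > 0

    fused = second.copy()
    fused[warped_mask] = warped[warped_mask]   # pixel replacement on the second image
    return fused
```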
The present application takes two focused shots, one focused on the portrait and one focused on the distance, applies AI portrait segmentation to the face-focused image, and fuses the result onto the far-focused image, so that both the portrait and the background are sharp across the full depth of field.
In some implementations, a storage medium is also provided, wherein the storage medium stores one or more programs which are executable by one or more processors to implement the steps of the full depth-of-field image synthesis method of the present application.
In some implementations, a smartphone is also provided, as shown in FIG. 2, which comprises at least one processor 20; a display screen 21; and a memory 22, and may further comprise a communications interface 23 and a bus 24. The processor 20, the display screen 21, the memory 22, and the communications interface 23 can communicate with one another via the bus 24. The display screen 21 is configured to display a user guidance interface preset in an initial setup mode. The communications interface 23 can transmit information. The processor 20 can invoke logic instructions in the memory 22 to execute the methods of the above embodiments.
In addition, the above logic instructions in the memory 22 may be implemented in the form of software functional units and, when sold or used as an independent product, may be stored in a computer-readable storage medium.
As a computer-readable storage medium, the memory 22 can be configured to store software programs and computer-executable programs, such as the program instructions or modules corresponding to the methods in the embodiments of the present disclosure. The processor 20 runs the software programs, instructions, or modules stored in the memory 22 to execute functional applications and data processing, that is, to implement the methods of the above embodiments.
The memory 22 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application required by at least one function, and the data storage area may store data created according to the use of the terminal device, and the like. In addition, the memory 22 may include high-speed random access memory and may also include non-volatile memory, for example, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media capable of storing program code; it may also be a transitory storage medium.
In addition, the specific process by which the plurality of instructions in the above storage medium and terminal device are loaded and executed by the processor has been described in detail in the above method and will not be restated here.
The smartphone comprises a processor adapted to implement instructions, and a storage medium adapted to store a plurality of instructions, the instructions being adapted to be loaded by the processor to execute the steps of the full depth-of-field image synthesis method described in the present application.
In summary, the present application proposes a full depth-of-field image synthesis method. When taking a photograph, focus is locked on the face and on the farthest point of the lens respectively, and a first image (a face-focused image) and a second image (a far-focused image) are acquired correspondingly; portrait segmentation is performed on the first image to obtain a portrait-region image; and the portrait-region image is aligned and fused with the second image to obtain a fused image. The present application takes two focused shots, one focused on the portrait and one focused on the distance, applies AI portrait segmentation to the face-focused image, and fuses the result onto the far-focused image, so that both the portrait and the background are sharp across the full depth of field.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (20)

  1. A full depth-of-field image synthesis method, comprising the steps of:
    when taking a photograph, locking focus on the face and on the farthest point of the lens respectively, and correspondingly acquiring a first image and a second image;
    performing portrait segmentation on the first image to obtain a portrait-region image;
    aligning and fusing the portrait-region image with the second image to obtain a fused image.
  2. The full depth-of-field image synthesis method according to claim 1, wherein the first image is a face-focused image, and the step of acquiring the face-focused image comprises:
    starting the camera and detecting whether face data is present in the camera preview;
    if face data is detected in the camera preview, starting the face focus mode to take a photograph and obtain a face-focused image.
  3. The full depth-of-field image synthesis method according to claim 2, wherein the second image is a far-focused image, and the step of acquiring the far-focused image comprises:
    after the face focus mode has been started to take a photograph and obtain the face-focused image, starting the far focus mode to take a photograph and obtain a far-focused image.
  4. The full depth-of-field image synthesis method according to claim 2, further comprising the step of:
    if no face data is detected in the camera preview, starting the close-up focus mode to obtain a close-up image.
  5. The full depth-of-field image synthesis method according to claim 2, wherein the step of performing portrait segmentation on the first image to obtain a portrait-region image comprises:
    training an image segmentation neural network using labeled image data containing faces as training samples, to obtain a trained image segmentation neural network;
    inputting the first image into the trained image segmentation neural network for image segmentation, to obtain the portrait-region image.
  6. The full depth-of-field image synthesis method according to claim 5, wherein the image segmentation neural network is an end-to-end trainable neural network.
  7. The full depth-of-field image synthesis method according to claim 5, wherein inputting the first image into the trained image segmentation neural network for image segmentation, to obtain the portrait-region image, comprises:
    transmitting the first image to the cloud over a network, to instruct the cloud to segment the first image using the image segmentation neural network and obtain the portrait-region image;
    receiving the portrait-region image returned by the cloud.
  8. The full depth-of-field image synthesis method according to claim 5, wherein inputting the first image into the trained image segmentation neural network for image segmentation, to obtain the portrait-region image, comprises:
    inputting the first image into a local computer device, to instruct the computer device to segment the first image using the image segmentation neural network and obtain the portrait-region image;
    receiving the portrait-region image returned by the local computer device.
  9. The full depth-of-field image synthesis method according to claim 5, wherein the step of training the image segmentation neural network using labeled image data containing faces as training images, to obtain the trained image segmentation neural network, comprises:
    using the image segmentation network to obtain a target region where at least one portrait in a training image is located, and obtaining position information of the portrait to be segmented in the target region, wherein the training image is annotated with position annotation information of the portrait to be segmented;
    training the image segmentation neural network based on the position information of the portrait to be segmented and the position annotation information of the portrait to be segmented, to obtain the trained image segmentation neural network.
  10. The full depth-of-field image synthesis method according to claim 9, wherein the format of the position annotation information is a heat map.
  11. The full depth-of-field image synthesis method according to claim 9, wherein the format of the position annotation information is coordinate points.
  12. The full depth-of-field image synthesis method according to claim 9, wherein the image segmentation neural network comprises at least a first sub-network, a second sub-network, and a third sub-network; using the image segmentation network to obtain the target region where at least one portrait in the training image is located comprises:
    using the first sub-network to obtain a feature map of the training image;
    using the second sub-network to process the feature map of the training image to obtain the target region where at least one portrait in the training image is located;
    and obtaining the position information of the portrait to be segmented in the target region comprises: using the third sub-network to obtain the position information of the portrait to be segmented in the target region.
  13. The full depth-of-field image synthesis method according to claim 12, wherein using the second sub-network to process the feature map of the training image to obtain the target region where at least one portrait in the training image is located comprises:
    dividing the training image into a plurality of sections along a target direction via the second sub-network, the target direction including at least the vertical direction or the horizontal direction;
    for each of the plurality of sections, determining within that section the ROI corresponding to the at least one portrait, the ROI being defined by a first boundary and a second boundary whose directions are perpendicular to the target direction;
    determining the target region where at least one portrait is located based on the ROIs in the plurality of sections.
  14. The full depth-of-field image synthesis method according to claim 13, wherein the plurality of sections are equal-width sections arranged vertically.
  15. The full depth-of-field image synthesis method according to claim 13, wherein the plurality of sections are equal-width sections arranged horizontally.
  16. The full depth-of-field image synthesis method according to claim 9, wherein training the image segmentation neural network based on the position information of the portrait to be segmented and the position annotation information of the portrait to be segmented, to obtain the trained image segmentation neural network, comprises:
    obtaining a first loss function value based on the position information of the portrait to be segmented and the position annotation information of the portrait to be segmented;
    determining whether the first loss function value satisfies a first preset condition;
    in response to the first loss function value not satisfying the first preset condition, adjusting the parameter values of the image segmentation neural network based on the first loss function value, and then performing the following operations iteratively until the first loss function value satisfies the first preset condition, to obtain the trained image segmentation neural network: using the second sub-network of the image segmentation neural network to obtain the target region where at least one portrait in the training image is located, and using the third sub-network of the image segmentation neural network to obtain the position information of the portrait to be segmented in the target region.
  17. The full depth-of-field image synthesis method according to claim 5, wherein the step of inputting the first image into the trained image segmentation neural network for image segmentation, to obtain the portrait-region image, comprises:
    acquiring the first image;
    using the trained image segmentation neural network to obtain a target region where at least one portrait in the first image is located, obtaining position information of the portrait to be segmented in the target region, and obtaining the portrait-region image based on the position information of the portrait.
  18. The full depth-of-field image synthesis method according to claim 1, wherein the step of aligning and fusing the portrait-region image with the second image to obtain the fused image comprises:
    computing the offset of the portrait-region image relative to the second image using a pixel alignment algorithm, and replacing the corresponding pixels of the second image with the pixels of the portrait-region image, to obtain the fused image.
  19. A storage medium, wherein the storage medium stores one or more programs which are executable by one or more processors to implement the steps of the full depth-of-field image synthesis method according to any one of claims 1 to 18.
  20. A smartphone, comprising a processor adapted to implement instructions, and a storage medium adapted to store a plurality of instructions, the instructions being adapted to be loaded by the processor to execute the steps of the full depth-of-field image synthesis method according to any one of claims 1 to 18.
PCT/CN2022/106869 2021-08-19 2022-07-20 Full depth-of-field image synthesis method, storage medium and smartphone WO2023020190A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110953450.2 2021-08-19
CN202110953450.2A CN113824877B (zh) 2021-08-19 2021-08-19 Full depth-of-field image synthesis method, storage medium and smartphone

Publications (1)

Publication Number Publication Date
WO2023020190A1 (zh)

Family

ID=78913288

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/106869 WO2023020190A1 (zh) 2021-08-19 2022-07-20 Full depth-of-field image synthesis method, storage medium and smartphone

Country Status (2)

Country Link
CN (1) CN113824877B (zh)
WO (1) WO2023020190A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113824877B (zh) 2021-08-19 2023-04-28 惠州Tcl云创科技有限公司 Full depth-of-field image synthesis method, storage medium and smartphone

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120320238A1 (en) * 2011-06-14 2012-12-20 AmTRAN TECHNOLOGY Co. Ltd Image processing system, camera system and image capture and synthesis method thereof
CN104333703A (zh) * 2014-11-28 2015-02-04 广东欧珀移动通信有限公司 Method and terminal for taking photographs using dual cameras
CN107392933A (zh) * 2017-07-12 2017-11-24 维沃移动通信有限公司 Image segmentation method and mobile terminal
CN108171743A (zh) * 2017-12-28 2018-06-15 努比亚技术有限公司 Image capturing method, device, and computer storage medium
CN112085686A (zh) * 2020-08-21 2020-12-15 北京迈格威科技有限公司 Image processing method and apparatus, electronic device, and computer-readable storage medium
CN112258528A (zh) * 2020-11-02 2021-01-22 Oppo广东移动通信有限公司 Image processing method and apparatus, and electronic device
CN112532881A (zh) * 2020-11-26 2021-03-19 维沃移动通信有限公司 Image processing method and apparatus, and electronic device
CN113824877A (zh) 2021-08-19 2021-12-21 惠州Tcl云创科技有限公司 Full depth-of-field image synthesis method, storage medium and smartphone


Also Published As

Publication number Publication date
CN113824877B (zh) 2023-04-28
CN113824877A (zh) 2021-12-21

Similar Documents

Publication Publication Date Title
US11756223B2 (en) Depth-aware photo editing
US10477005B2 (en) Portable electronic devices with integrated image/video compositing
US10609284B2 (en) Controlling generation of hyperlapse from wide-angled, panoramic videos
US10425638B2 (en) Equipment and method for promptly performing calibration and verification of intrinsic and extrinsic parameters of a plurality of image capturing elements installed on electronic device
WO2018201809A1 (zh) Dual camera-based image processing apparatus and method
CN104680501B (zh) Image stitching method and apparatus
WO2016074620A1 (en) Parallax tolerant video stitching with spatial-temporal localized warping and seam finding
JP2016538783A (ja) System and method for generating composite images of long documents using mobile video data
US20160323505A1 (en) Photographing processing method, device and computer storage medium
CN104660909A (zh) Image acquisition method, image acquisition apparatus and terminal
CN110611768B (zh) Multiple-exposure photography method and apparatus
CN112311965A (zh) Virtual shooting method, apparatus, system and storage medium
WO2011014421A2 (en) Methods, systems, and computer-readable storage media for generating stereoscopic content via depth map creation
US20160191898A1 (en) Image Processing Method and Electronic Device
US11393076B2 (en) Blurring panoramic image blurring method, terminal and computer readable storage medium
CN105657394A (zh) Dual camera-based shooting method, shooting apparatus and mobile terminal
KR101549929B1 (ko) Method and apparatus for generating a depth map
WO2023020190A1 (zh) Full depth-of-field image synthesis method, storage medium and smartphone
WO2018116322A1 (en) System and method for generating pan shots from videos
CN108053376A (zh) Semantic segmentation information guided deep-learning fisheye image correction method
CN110796690B (zh) Image matching method and image matching apparatus
CN105467741A (zh) Panoramic photographing method and terminal
WO2015141185A1 (ja) Imaging control device, imaging control method, and recording medium
WO2019000427A1 (zh) Image processing method and apparatus, and electronic device
EP3952286A1 (en) Photographing method, apparatus and system, and computer readable storage medium

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 2022857503

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2022857503

Country of ref document: EP

Effective date: 20240319