TW202422478A - Image scene construction method, apparatus, electronic equipment and storage medium - Google Patents

Image scene construction method, apparatus, electronic equipment and storage medium

Info

Publication number
TW202422478A
Authority
TW
Taiwan
Prior art keywords
scene
target
image
shooting device
preset radius
Prior art date
Application number
TW112102524A
Other languages
Chinese (zh)
Other versions
TWI833560B (en)
Inventor
雲昊
許國軍
Original Assignee
大陸商立訊精密科技(南京)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 大陸商立訊精密科技(南京)有限公司
Application granted
Publication of TWI833560B
Publication of TW202422478A

Landscapes

  • Processing Or Creating Images (AREA)
  • Television Signal Processing For Recording (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides an image scene construction method, apparatus, electronic device and storage medium. The method comprises: obtaining target positioning information of a target device; determining, based on the target positioning information, a scene shooting device within a preset radius centered on the target positioning information; determining a target image based on the position and viewing angle information of the scene shooting device; and constructing the image scene based on the target image. With the technical means of the embodiments of the present invention, the devices used to acquire images for constructing an AR scene can be determined flexibly and in real time as the position of the target device changes, which improves the flexibility and practicality of image scene construction. At the same time, the image scene can be constructed precisely according to different positions and viewing angles, so that the generated third-person AR scene reproduces the real world relatively faithfully, improving the accuracy of image scene construction.

Description

Image scene construction method, apparatus, electronic device and storage medium

The present invention relates to the field of image processing technology, and more particularly to an image scene construction method, apparatus, electronic device and storage medium.

With the development of technologies such as virtual reality (VR) and augmented reality (AR), more and more industries have begun to adopt these multimedia technologies for three-dimensional modeling and intelligent interaction. The construction of image scenes in particular is widely used in industries and fields such as transportation and gaming, providing users with a good experience.

At present, methods for constructing a three-dimensional image scene from a third-person perspective generally use recording devices and positioning devices to record images or videos of the space in which the user is located, and synthesize the scene through post-production software to present the user with a third-person-perspective display. However, this approach is only applicable to indoor locations where recording equipment can be conveniently arranged, for example when playing AR games indoors. Its use is therefore rather limited and inflexible.

The present invention provides an image scene construction method, apparatus, electronic device and storage medium to improve the flexibility of constructing an image scene from a third-person perspective.

According to one aspect of the present invention, an image scene construction method is provided, the method comprising: obtaining target positioning information of a target device; determining, based on the target positioning information, a scene shooting device within a preset radius centered on the target positioning information; determining a target image based on the position and viewing angle information of the scene shooting device; and constructing an image scene based on the target image.
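
For readability, a minimal end-to-end sketch of this four-step flow is given below in Python. It is illustrative only and not the claimed implementation: the names (`Device`, `capture_frame`, `build_scene`) are hypothetical placeholders, positions are assumed to be planar coordinates in meters, and the capture and rendering steps are stubbed out.

```python
from dataclasses import dataclass
from typing import List, Tuple
import math


@dataclass
class Device:
    device_id: str
    position: Tuple[float, float]   # assumed planar (x, y) coordinates in meters


def capture_frame(device: Device) -> str:
    # Placeholder: a real system would fetch the latest camera frame here.
    return f"frame-from-{device.device_id}"


def build_scene(frames: List[str], center: Tuple[float, float]) -> dict:
    # Placeholder: a real system would stitch and render the third-person scene here.
    return {"center": center, "frames": frames}


def construct_image_scene(target: Device, all_devices: List[Device],
                          preset_radius: float = 10.0) -> dict:
    # Step 1: obtain the target positioning information.
    target_position = target.position
    # Step 2: scene shooting devices within the preset radius around the target.
    scene_devices = [d for d in all_devices
                     if d.device_id != target.device_id
                     and math.dist(d.position, target_position) <= preset_radius]
    # Step 3: determine the target images from the scene shooting devices.
    target_images = [capture_frame(d) for d in scene_devices]
    # Step 4: construct the image scene from the target images.
    return build_scene(target_images, target_position)
```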

According to another aspect of the present invention, an image scene construction apparatus is provided, comprising: a positioning information acquisition module, configured to obtain target positioning information of a target device; a shooting device determination module, configured to determine, based on the target positioning information, a scene shooting device within a preset radius centered on the target positioning information; a target image determination module, configured to determine a target image based on the position and viewing angle information of the scene shooting device; and an image scene construction module, configured to construct an image scene based on the target image.

According to another aspect of the present invention, an electronic device is provided, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores a computer program executable by the at least one processor, and the computer program is executed by the at least one processor so that the at least one processor can perform the image scene construction method described in any embodiment of the present invention.

According to another aspect of the present invention, a computer-readable storage medium is provided. The computer-readable storage medium stores computer instructions which, when executed by a processor, cause the processor to implement the image scene construction method described in any embodiment of the present invention.

In the technical means of the embodiments of the present invention, scene shooting devices within a preset radius centered on the target device are determined according to the target positioning information of the target device. In this way, the devices used to acquire images for constructing an AR scene can be determined in real time and flexibly as the position of the target device changes, which improves the flexibility and practicality of image scene construction. At the same time, a target image is determined based on the position and viewing angle information of the scene shooting devices in order to construct the image scene, so the image scene can be constructed precisely according to different positions and viewing angles, making the generated third-person AR scene reproduce the real world relatively faithfully and improving the accuracy of image scene construction.

It should be understood that the content described in this section is not intended to identify key or important features of the embodiments of the present invention, nor is it intended to limit the scope of the present invention. Other features of the present invention will become easy to understand from the following description.

In order to enable a person of ordinary skill in the art to better understand the solutions of the present invention, the technical means in the embodiments of the present invention will be described clearly and completely below in conjunction with the drawings of the embodiments. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the scope of protection of the present invention.

It should be noted that the terms "first", "second", etc. in the specification, claims and drawings of the present invention are used to distinguish similar objects, and are not necessarily used to describe a specific order or sequence. It should be understood that data used in this way can be interchanged where appropriate, so that the embodiments of the present invention described herein can be implemented in an order other than that illustrated or described herein. In addition, the terms "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device comprising a series of steps or units is not necessarily limited to those steps or units clearly listed, but may include other steps or units that are not clearly listed or that are inherent to the process, method, product or device.
[Embodiment 1]

FIG. 1 is a flowchart of an image scene construction method provided by Embodiment 1 of the present invention. This embodiment is applicable to constructing a third-person augmented reality picture in a real environment. The method can be executed by an image scene construction apparatus, which can be implemented in the form of hardware and/or software and can be configured in an electronic device. As shown in FIG. 1, the method comprises:
S110. Obtain target positioning information of a target device.

The target device may be a device that requires third-person three-dimensional scene display, such as AR (Augmented Reality) glasses. The target positioning information may be the position data of the target device in a given space, or specific geographic location information obtained by positioning with navigation satellites. For example, outdoors, the target device may communicate with navigation satellites through a built-in navigation chip, and a background server may obtain the coordinate data of the target device, i.e., the target positioning information.

S120. Determine, based on the target positioning information, scene shooting devices within a preset radius centered on the target positioning information.

The scene shooting device may be any hardware device capable of capturing images or recording video, for example a shooting device located in the same given space as, or geographically close to, the target device (it may likewise be a pair of AR glasses or another device equipped with a camera, and it may be a mobile or a fixed shooting device; the embodiments of the present invention do not limit this). Scene shooting devices are determined within a radius of a preset length centered on the target device (the target positioning information); that is, shooting devices other than the target device are determined within the preset radius centered on the target positioning information. The length of the preset radius can be set by those skilled in the art according to the specific situation or practical experience, for example 10 meters or 20 meters. In practice, each scene shooting device also has a built-in navigation chip, and the background server can obtain information on all shooting devices within the preset radius (which may include more than just positioning information) based on the target positioning information of the target device determined in the preceding step.
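
As a concrete illustration of this step, the following Python sketch filters a registry of shooting devices down to those within the preset radius of the target position. The device registry, the function name and the planar-distance model are assumptions made for illustration; a deployed system working with satellite coordinates would use a geodesic distance instead.

```python
import math
from typing import Dict, List, Tuple

Position = Tuple[float, float]   # assumed planar (x, y) coordinates in meters


def find_scene_shooting_devices(target_id: str,
                                target_position: Position,
                                device_positions: Dict[str, Position],
                                preset_radius: float = 10.0) -> List[str]:
    """Return the IDs of shooting devices (other than the target device itself)
    whose reported position lies within `preset_radius` of the target."""
    nearby = []
    for device_id, position in device_positions.items():
        if device_id == target_id:
            continue                      # the target device is never its own source
        if math.dist(position, target_position) <= preset_radius:
            nearby.append(device_id)
    return nearby


# Example with made-up positions (meters):
devices = {"glasses-A": (3.0, 4.0), "glasses-B": (12.0, 1.0), "cam-C": (-2.0, 6.0)}
print(find_scene_shooting_devices("target", (0.0, 0.0), devices, preset_radius=10.0))
# -> ['glasses-A', 'cam-C']
```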

S130. Determine a target image based on the position and viewing angle information of the scene shooting devices.

After the scene shooting devices are determined in the preceding step, the positioning information of these scene shooting devices (i.e., their positions, or their positions relative to the target device) can be obtained. The viewing angle information may include the camera field of view of the scene shooting device (for example, between 25° and 124°, depending on the hardware of the scene shooting device); it may also include the direction of the viewing angle, for example the forward direction at the center of the field of view. The image information within the field of view can be acquired by the scene shooting device, and the acquired medium may be an image or a video. The target image may be a scene image captured by a scene shooting device, or a frame of a video. It should be noted that the target image may contain the target device, which helps to establish a third-person-perspective AR scene of the target device.

Optionally, the preset radius includes a first preset radius. Correspondingly, determining the target image based on the position and viewing angle information of the scene shooting devices may include: screening out target shooting devices from within the first preset radius based on the position and viewing angle information of the scene shooting devices; and, when the number of target shooting devices meets a preset number threshold, using the images captured by the target shooting devices as target images.

The first preset radius may be the radius range of the main shooting devices used to determine the third-person AR perspective. For example, the first preset radius may be set to 10 meters; that is, all scene shooting devices within a radius of 10 meters centered on the target device may serve as shooting devices for constructing a third-person-perspective AR scene for the target device.

However, the images captured by too few scene shooting devices may not be sufficient to construct AR scene images in subsequent processing. Therefore, a number threshold needs to be set in advance, and target images are acquired only when the number of scene shooting devices within the first preset radius exceeds this threshold, so as to provide material for the subsequent construction of AR scene images; for example, the threshold may be set to 5. It should also be explained that, since the target device may be a device such as AR glasses worn by the user, the target device moves as the user moves, so the background server needs to determine, based on the movement of the target device (real-time positioning), the number of scene shooting devices within the first preset radius around the target device.

It can be understood that only when the field of view contains the target device (and the user) can the corresponding image conveniently be used in later synthesis to construct a third-person AR image scene with the target device (and the user) as the main subject. Therefore, in an optional implementation, screening out target shooting devices from within the first preset radius based on the position and viewing angle information of the scene shooting devices may include: if a scene shooting device is within the first preset radius of the target device and the target device is present in the viewing angle information of the scene shooting device, determining the scene shooting device as a target shooting device.
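
A minimal sketch of this screening condition is given below. It assumes planar coordinates, a camera forward direction expressed as an angle in radians and a symmetric horizontal field of view; the function name and data layout are illustrative assumptions rather than the claimed implementation.

```python
import math
from typing import Tuple

Position = Tuple[float, float]   # assumed planar (x, y) coordinates in meters


def is_target_shooting_device(device_pos: Position,
                              view_dir: float,      # camera forward direction, radians
                              fov: float,           # full horizontal field of view, radians
                              target_pos: Position,
                              first_radius: float = 10.0) -> bool:
    """True if the device lies within the first preset radius of the target
    and the target lies inside the device's horizontal field of view."""
    dx = target_pos[0] - device_pos[0]
    dy = target_pos[1] - device_pos[1]
    if math.hypot(dx, dy) > first_radius:
        return False
    # Bearing from the device to the target, compared with the camera forward direction.
    bearing = math.atan2(dy, dx)
    offset = abs((bearing - view_dir + math.pi) % (2 * math.pi) - math.pi)
    return offset <= fov / 2


# Example: a device 5 m away, looking straight at the target with a 90-degree FOV.
print(is_target_shooting_device((5.0, 0.0), math.pi, math.radians(90), (0.0, 0.0)))
# -> True
```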

It can be seen that, within the first preset radius, a scene shooting device that can capture the target device can serve as a target shooting device to acquire target images, which helps to later construct a third-person AR scene image with the target device as the main subject.

In another optional implementation, screening out target shooting devices from within the first preset radius based on the position and viewing angle information of the scene shooting devices may include: if a scene shooting device is at the first preset radius from the target device and the target device is present in the viewing angle information of the scene shooting device, determining the scene shooting device as a target shooting device.

It can be understood that, in practice, since the fields of view of the various scene shooting devices are similar (almost all wide-angle), different scene shooting devices are at different distances from the target device and the user wearing it, which causes the user to appear at different sizes in different devices' images (due to the physics of perspective). As a result, during later image synthesis and scene construction, the different image sizes of the target subject (i.e., the user wearing the target device) in different images impose a computational burden on constructing the same AR scene: the computation is large and error-prone, and more computing resources are consumed in restoring the appearance of the target subject.

Therefore, when the scene shooting devices are all at the same distance from the target device, for example all at the first preset radius, and the target device (and the user) is present in their viewing angle information, this is very beneficial for later image synthesis and scene construction: it can further reduce the amount of computation and improve the efficiency and accuracy of AR scene construction.

Further, the preset radius includes a second preset radius, and the second preset radius is greater than the first preset radius. Correspondingly, after determining the target image based on the position and viewing angle information of the scene shooting devices, the method may further include: if the distance between a scene shooting device and the target device is between the first preset radius and the second preset radius, or the target device is not present in the viewing angle information of the scene shooting device, using the scene shooting device as an auxiliary shooting device.

It should be noted that, in a common situation, using only scene shooting devices whose viewing angle information contains the target device to photograph the user and construct the AR scene may leave some image information missing for objects and the environment in the user's background or foreground, making the final third-person AR scene incomplete. Therefore, the image information of these environments and objects needs to be supplemented.

There are then two cases. First, the auxiliary shooting devices used to supplement image information such as the environment and objects are not within the first preset radius; second, regardless of whether an auxiliary shooting device is within the first preset radius, its viewing angle information does not contain the target device (and the user). It can be understood that a device satisfying at least one of these two cases can serve as an auxiliary shooting device to acquire scene images beyond the target device (and the user), thereby better supplementing the background and foreground image information in the third-person AR scene and making the construction of the AR scene more accurate and complete. Of course, since the image acquisition of each shooting device is subject to certain distance or quantity limits, setting a second preset radius greater than the first preset radius and determining the available auxiliary shooting devices within the second preset radius is a feasible implementation; for example, the second preset radius may be set to 20 meters.
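
Bringing the two radii together, the sketch below classifies each nearby device as a target shooting device or an auxiliary shooting device and applies the preset count threshold described above. It reuses the planar-coordinate and field-of-view assumptions of the previous sketch; all names are hypothetical and the threshold handling is one possible policy, not the claimed one.

```python
import math
from dataclasses import dataclass
from typing import List, Tuple

Position = Tuple[float, float]   # assumed planar (x, y) coordinates in meters


@dataclass
class ShootingDevice:
    device_id: str
    position: Position
    view_dir: float   # camera forward direction, radians
    fov: float        # full horizontal field of view, radians


def sees_target(dev: ShootingDevice, target: Position) -> bool:
    dx, dy = target[0] - dev.position[0], target[1] - dev.position[1]
    bearing = math.atan2(dy, dx)
    offset = abs((bearing - dev.view_dir + math.pi) % (2 * math.pi) - math.pi)
    return offset <= dev.fov / 2


def classify_devices(devices: List[ShootingDevice], target: Position,
                     first_radius: float = 10.0, second_radius: float = 20.0,
                     count_threshold: int = 5):
    """Split devices into target shooting devices (within the first radius and
    seeing the target) and auxiliary devices (within the second radius but either
    beyond the first radius or not seeing the target)."""
    targets, auxiliaries = [], []
    for dev in devices:
        distance = math.dist(dev.position, target)
        if distance > second_radius:
            continue                           # out of the effective range entirely
        if distance <= first_radius and sees_target(dev, target):
            targets.append(dev.device_id)
        else:
            auxiliaries.append(dev.device_id)
    if len(targets) < count_threshold:
        # Too few main devices: do not start the third-person service yet.
        return [], auxiliaries
    return targets, auxiliaries
```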

S140. Construct the image scene based on the target images.

Based on the target images obtained in the above steps and implementations, the planar images are synthesized, by means of any post-processing image technology in the prior art, into a third-person AR scene with the target device (and the user) as the central subject.

Optionally, constructing the image scene based on the target images may include: constructing the image scene based on the target images and auxiliary images captured by the auxiliary shooting devices.

Here, the target images may be images containing the target device, and the auxiliary images may be images not containing the target device. Synthesizing the AR scene only from images containing the target device may result in incomplete image information, losing some foreground or background. Therefore, combining the target images and the auxiliary images to construct the image scene completes the scene information, making the constructed AR scene more complete.

Further, after constructing the image scene based on the target images, the method may further include: projecting the image scene to the target device for display.

After the image scene is constructed, the visual information of the image scene is sent to the target device for display. For example, the constructed third-person AR scene can be fed back to the AR glasses worn by the user, so that the user can see his or her own third-person perspective through the AR glasses, thereby improving the user experience.

In the technical means of the embodiments of the present invention, scene shooting devices within a preset radius centered on the target device are determined according to the target positioning information of the target device. In this way, the devices used to acquire images for constructing an AR scene can be determined in real time and flexibly as the position of the target device changes, which improves the flexibility and practicality of image scene construction. At the same time, a target image is determined based on the position and viewing angle information of the scene shooting devices in order to construct the image scene, so the image scene can be constructed precisely according to different positions and viewing angles, making the generated third-person AR scene reproduce the real world relatively faithfully and improving the accuracy of image scene construction.
[Embodiment 2]

FIG. 2 illustrates an image scene construction method provided by Embodiment 2 of the present invention. This embodiment is a preferred embodiment provided on the basis of the foregoing implementations. As shown in FIG. 2, the method comprises: the target device (for example AR glasses or another device worn by the user) enables the third-person service through the human-computer interaction interface of a third-person application, and the application then uploads the positioning information of the target device to the server over the network.

The server searches the database for other AR devices within a certain range around the position of the target device (the general camera focal-length range of AR devices, for example 10 meters). If the number found exceeds an effective threshold (for example 5), a third-person service support request is sent to the devices that meet the position condition (i.e., within the 10-meter range) to request their field-of-view information.

After the field-of-view information is obtained, a group of AR devices is selected as the target shooting devices: they are at the same distance from the target device, the forward directions of their fields of view differ from one another as much as possible (it can be understood that small differences between forward directions would make the scene information contained in the acquired views rather repetitive), and the target device is contained in their fields of view (as shown in FIG. 2). Other AR devices within the effective range (the maximum effective shooting range of a general AR camera, for example 20 meters) are determined as auxiliary shooting devices. Requests to enable the third-person support service are sent to the target shooting devices and the auxiliary shooting devices respectively. The difference between the two roles may be that the scene pictures captured by the target shooting devices are only rotated, cropped and stitched and therefore require high-resolution photos, while the pictures captured by the auxiliary shooting devices are used to reference and confirm scene objects and details, need to be cropped, scaled and blurred, and do not require very high resolution.
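
The selection of a group of devices that are at the same distance from the target and whose forward view directions differ from one another as much as possible can be sketched, for example, as a greedy angular-spread selection. The sketch below is one possible interpretation under the same planar, radian-angle assumptions as before; the embodiment only states the selection criteria and does not prescribe this particular algorithm.

```python
import math
from typing import Dict, List


def angular_gap(a: float, b: float) -> float:
    """Smallest absolute difference between two directions, in radians."""
    return abs((a - b + math.pi) % (2 * math.pi) - math.pi)


def pick_spread_out_devices(view_dirs: Dict[str, float], k: int) -> List[str]:
    """Greedily pick up to k devices whose forward view directions are as
    different from one another as possible (maximin angular gap)."""
    remaining = dict(view_dirs)
    if not remaining or k <= 0:
        return []
    # Start from an arbitrary device, then repeatedly add the device whose
    # direction is farthest (in angle) from every already-selected direction.
    first = next(iter(remaining))
    chosen = [first]
    del remaining[first]
    while remaining and len(chosen) < k:
        best = max(remaining,
                   key=lambda d: min(angular_gap(remaining[d], view_dirs[c])
                                     for c in chosen))
        chosen.append(best)
        del remaining[best]
    return chosen


# Example: four candidates looking in different directions; pick three of them.
dirs = {"A": 0.0, "B": 0.3, "C": math.pi, "D": math.pi / 2}
print(pick_spread_out_devices(dirs, 3))   # -> ['A', 'C', 'D']
```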

The target shooting devices and the auxiliary shooting devices turn on their front AR cameras for continuous shooting at 10 Hz and send pictures and field-of-view information to the server. Every second, or every frame, the server checks whether each field of view still contains the position of the target device, then crops the pictures from all target shooting devices, rendering the missing and unclear parts using the pictures from the auxiliary shooting devices to form a 3D scene picture. The position of the target device and the user's appearance are then combined to model and render a user-centered 3D scene picture.
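
The per-frame check that a target shooting device's field of view still contains the target position can be sketched as follows, reusing the same geometric test as the earlier sketches. The frame payload layout is made up for illustration; frames whose field of view has lost the target are demoted to reference-only material.

```python
import math
from typing import Dict, List, Tuple

Position = Tuple[float, float]   # assumed planar (x, y) coordinates in meters


def still_sees_target(device_pos: Position, view_dir: float, fov: float,
                      target_pos: Position) -> bool:
    dx, dy = target_pos[0] - device_pos[0], target_pos[1] - device_pos[1]
    bearing = math.atan2(dy, dx)
    offset = abs((bearing - view_dir + math.pi) % (2 * math.pi) - math.pi)
    return offset <= fov / 2


def partition_frames(frames: List[Dict], target_pos: Position):
    """Split one 10 Hz batch of frames into frames that still show the target
    (usable for cropping and stitching) and frames that no longer do (kept only
    as auxiliary reference material). Each frame is an assumed payload of the
    form {'device': str, 'image': ..., 'pos': (x, y), 'view_dir': float, 'fov': float}."""
    usable, reference_only = [], []
    for frame in frames:
        if still_sees_target(frame["pos"], frame["view_dir"], frame["fov"], target_pos):
            usable.append(frame)
        else:
            reference_only.append(frame)
    return usable, reference_only
```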

The server sends the constructed scene images to the requesting device (i.e., the target device) in real time over the network, and the target device projects these images onto its AR lenses to complete the switch to the third-person scene, so that the user can see, through the AR glasses he or she wears, a three-dimensional third-person AR scene centered on himself or herself.
[Embodiment 3]

FIG. 3 is a schematic structural diagram of an image scene construction apparatus provided by Embodiment 3 of the present invention. As shown in FIG. 3, the apparatus 300 comprises: a positioning information acquisition module 310, configured to obtain target positioning information of a target device; a shooting device determination module 320, configured to determine, based on the target positioning information, scene shooting devices within a preset radius centered on the target positioning information; a target image determination module 330, configured to determine a target image based on the position and viewing angle information of the scene shooting devices; and an image scene construction module 340, configured to construct an image scene based on the target image.

In the technical means of the embodiments of the present invention, scene shooting devices within a preset radius centered on the target device are determined according to the target positioning information of the target device. In this way, the devices used to acquire images for constructing an AR scene can be determined in real time and flexibly as the position of the target device changes, which improves the flexibility and practicality of image scene construction. At the same time, a target image is determined based on the position and viewing angle information of the scene shooting devices in order to construct the image scene, so the image scene can be constructed precisely according to different positions and viewing angles, making the generated third-person AR scene reproduce the real world relatively faithfully and improving the accuracy of image scene construction.

In an optional implementation, the preset radius includes a first preset radius. Correspondingly, the target image determination module 330 may include: a target device screening unit, configured to screen out target shooting devices from within the first preset radius based on the position and viewing angle information of the scene shooting devices; and a target image determination unit, configured to use, when the number of target shooting devices meets a preset number threshold, the images captured by the target shooting devices as target images.

In an optional implementation, the target device screening unit may be specifically configured to: if a scene shooting device is within the first preset radius of the target device and the target device is present in the viewing angle information of the scene shooting device, determine the scene shooting device as a target shooting device.

In an optional implementation, the target device screening unit may be specifically configured to: if a scene shooting device is at the first preset radius from the target device and the target device is present in the viewing angle information of the scene shooting device, determine the scene shooting device as a target shooting device.

In an optional implementation, the preset radius includes a second preset radius, and the second preset radius is greater than the first preset radius. Correspondingly, the target image determination module 330 may further be configured to: if the distance between a scene shooting device and the target device is between the first preset radius and the second preset radius, or the target device is not present in the viewing angle information of the scene shooting device, use the scene shooting device as an auxiliary shooting device.

In an optional implementation, the image scene construction module 340 may be specifically configured to: construct the image scene based on the target images and the auxiliary images captured by the auxiliary shooting devices.

In an optional implementation, the apparatus 300 may further include an image scene display module, configured to project the image scene to the target device for display.

The image scene construction apparatus provided in the embodiments of the present invention can execute the image scene construction method provided in any embodiment of the present invention, and has the functional modules and effects corresponding to executing each image scene construction method.
[Embodiment 4]

FIG. 4 shows a schematic structural diagram of an electronic device 10 that can be used to implement an embodiment of the present invention. The electronic device is intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. The electronic device can also represent various forms of mobile devices, such as personal digital assistants, mobile phones, smart phones, wearable devices (such as helmets, glasses, watches, etc.) and other similar computing devices. The components shown in this specification, their connections and relationships, and their functions are merely examples, and are not intended to limit the implementation of the present invention described and/or claimed in this specification.

As shown in FIG. 4, the electronic device 10 comprises at least one processor 11, and a memory communicatively connected to the at least one processor 11, such as a read-only memory (ROM) 12 and a random access memory (RAM) 13, where the memory stores a computer program executable by the at least one processor. The processor 11 can perform various appropriate actions and processes according to the computer program stored in the ROM 12 or the computer program loaded from the memory unit 18 into the RAM 13. Various programs and data required for the operation of the electronic device 10 can also be stored in the RAM 13. The processor 11, the ROM 12 and the RAM 13 are connected to one another via a bus 14. An input/output (I/O) interface 15 is also connected to the bus 14.

Multiple components of the electronic device 10 are connected to the input/output (I/O) interface 15, including: an input unit 16, such as a keyboard or mouse; an output unit 17, such as various types of displays and speakers; a memory unit 18, such as a magnetic disk or optical disc; and a communication unit 19, such as a network card, modem or wireless communication transceiver. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.

The processor 11 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the processor 11 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various processors that run machine learning model algorithms, a digital signal processor (DSP), and any appropriate processor, controller, microcontroller, etc. The processor 11 executes the various methods and processes described above, such as the image scene construction method.

In some embodiments, the image scene construction method can be implemented as a computer program, which is tangibly embodied in a computer-readable storage medium, such as the memory unit 18. In some embodiments, part or all of the computer program can be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the image scene construction method described above can be performed. Alternatively, in other embodiments, the processor 11 can be configured to execute the image scene construction method in any other appropriate manner (for example, by means of firmware).

The various implementations of the systems and techniques described above in this specification can be implemented in digital electronic circuit systems, integrated circuit systems, field programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), application-specific standard products (ASSP), systems on chip (SOC), complex programmable logic devices (CPLD), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include implementation in one or more computer programs, which can be executed and/or interpreted on a programmable system including at least one programmable processor. The programmable processor may be a special-purpose or general-purpose programmable processor that can receive data and instructions from a storage system, at least one input device, and at least one output device, and transmit data and instructions to the storage system, the at least one input device, and the at least one output device.

Computer programs for implementing the methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general-purpose computer, a special-purpose computer or another programmable data processing apparatus, so that when the computer program is executed by the processor, the functions/operations specified in the flowcharts and/or block diagrams are implemented. A computer program may be executed entirely on a machine, partly on a machine, partly on a machine and partly on a remote machine as a stand-alone software package, or entirely on a remote machine or server.

In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain or store a computer program for use by or in connection with an instruction execution system, apparatus or device. The computer-readable storage medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared or semiconductor systems, apparatuses or devices, or any suitable combination of the foregoing. Alternatively, the computer-readable storage medium may be a machine-readable signal medium. More specific examples of machine-readable storage media include an electrical connection based on one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM) or flash memory, optical fiber, compact disc read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.

To provide interaction with a user, the systems and techniques described herein can be implemented on an electronic device having: a display device for displaying information to the user (for example, a cathode ray tube (CRT) or liquid crystal display (LCD) monitor); and a keyboard and a pointing device (for example, a mouse or trackball) through which the user can provide input to the electronic device. Other kinds of devices can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (for example, visual feedback, auditory feedback, or tactile feedback), and input from the user can be received in any form (including acoustic input, voice input, or tactile input).

The systems and techniques described herein can be implemented in a computing system that includes back-end components (for example, as a data server), or a computing system that includes middleware components (for example, an application server), or a computing system that includes front-end components (for example, a user computer with a graphical user interface or a web browser through which a user can interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware or front-end components. The components of the system can be interconnected by digital data communication in any form or medium (for example, a communication network). Examples of communication networks include: local area networks (LAN), wide area networks (WAN), blockchain networks, and the Internet.

A computing system can include clients and servers. A client and a server are generally remote from each other and usually interact through a communication network. The client-server relationship arises from computer programs running on the respective computers and having a client-server relationship with each other. The server can be a cloud server, also known as a cloud computing server or cloud host, which is a host product in the cloud computing service system that overcomes the drawbacks of difficult management and weak business scalability found in traditional physical hosts and VPS services.

It should be understood that steps may be reordered, added or deleted using the various forms of flow shown above. For example, the steps described in the present invention may be executed in parallel, sequentially, or in a different order, as long as the results expected by the technical means of the present invention can be achieved; this specification does not impose any limitation here.

The above specific implementations do not constitute a limitation on the scope of protection of the present invention. A person of ordinary skill in the art should understand that various modifications, combinations, sub-combinations and substitutions can be made according to design requirements and other factors. Any modification, equivalent substitution, improvement, etc. made within the spirit and principles of the present invention shall be included in the scope of protection of the present invention.

The present invention claims priority to the Chinese patent application filed with the China Patent Office on November 25, 2022, with application number 202211493520.1, the entire contents of which are incorporated herein by reference.

10: electronic device
11: processor
12: read-only memory (ROM)
13: random access memory (RAM)
14: bus
15: input/output (I/O) interface
16: input unit
17: output unit
18: memory unit
19: communication unit
300: apparatus
310: positioning information acquisition module
320: shooting device determination module
330: target image determination module
340: image scene construction module

In order to more clearly explain the technical means in the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of an image scene construction method provided according to Embodiment 1 of the present invention.
FIG. 2 is a schematic diagram of shooting devices applicable to Embodiment 2 of the present invention.
FIG. 3 is a schematic structural diagram of an image scene construction apparatus provided according to Embodiment 3 of the present invention.
FIG. 4 is a schematic structural diagram of an electronic device implementing the image scene construction method of an embodiment of the present invention.

S110, S120, S130, S140: steps

Claims (10)

1. An image scene construction method, characterized by comprising: obtaining target positioning information of a target device; determining, based on the target positioning information, a scene shooting device within a preset radius centered on the target positioning information; determining a target image based on the position and viewing angle information of the scene shooting device; and constructing an image scene based on the target image.
2. The method according to claim 1, wherein the preset radius includes a first preset radius; correspondingly, determining the target image based on the position and viewing angle information of the scene shooting device includes: screening out a target shooting device from within the first preset radius based on the position and viewing angle information of the scene shooting device; and, when the number of target shooting devices meets a preset number threshold, using the image captured by the target shooting device as the target image.
3. The method according to claim 2, wherein screening out the target shooting device from within the first preset radius based on the position and viewing angle information of the scene shooting device includes: if the scene shooting device is within the first preset radius of the target device and the target device is present in the viewing angle information of the scene shooting device, determining the scene shooting device as the target shooting device.
4. The method according to claim 2, wherein screening out the target shooting device from within the first preset radius based on the position and viewing angle information of the scene shooting device includes: if the scene shooting device is at the first preset radius from the target device and the target device is present in the viewing angle information of the scene shooting device, determining the scene shooting device as the target shooting device.
5. The method according to any one of claims 2 to 4, wherein the preset radius includes a second preset radius, and the second preset radius is greater than the first preset radius; correspondingly, after determining the target image based on the position and viewing angle information of the scene shooting device, the method further includes: if the distance between the scene shooting device and the target device is between the first preset radius and the second preset radius, or the target device is not present in the viewing angle information of the scene shooting device, using the scene shooting device as an auxiliary shooting device.
6. The method according to claim 5, wherein constructing the image scene based on the target image includes: constructing the image scene based on the target image and an auxiliary image captured by the auxiliary shooting device.
7. The method according to any one of claims 1 to 4, wherein after constructing the image scene based on the target image, the method further includes: projecting the image scene to the target device for display.
8. An image scene construction apparatus, characterized by comprising: a positioning information acquisition module, configured to obtain target positioning information of a target device; a shooting device determination module, configured to determine, based on the target positioning information, a scene shooting device within a preset radius centered on the target positioning information; a target image determination module, configured to determine a target image based on the position and viewing angle information of the scene shooting device; and an image scene construction module, configured to construct an image scene based on the target image.
9. An electronic device, characterized by comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores a computer program executable by the at least one processor, and the computer program is executed by the at least one processor so that the at least one processor can perform the image scene construction method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing computer instructions which, when executed by a processor, cause the processor to implement the image scene construction method according to any one of claims 1 to 7.
TW112102524A 2022-11-25 2023-01-19 Image scene construction method, apparatus, electronic equipment and storage medium TWI833560B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211493520.1A CN115713614A (en) 2022-11-25 2022-11-25 Image scene construction method and device, electronic equipment and storage medium
CN202211493520.1 2022-11-25

Publications (2)

Publication Number Publication Date
TWI833560B TWI833560B (en) 2024-02-21
TW202422478A 2024-06-01

Family

ID=85234811

Family Applications (1)

Application Number Title Priority Date Filing Date
TW112102524A TWI833560B (en) 2022-11-25 2023-01-19 Image scene construction method, apparatus, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN115713614A (en)
TW (1) TWI833560B (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI526992B (en) * 2015-01-21 2016-03-21 國立清華大學 Method for optimizing occlusion in augmented reality based on depth camera
CN107315470B (en) * 2017-05-25 2018-08-17 腾讯科技(深圳)有限公司 Graphic processing method, processor and virtual reality system
CN110132242B (en) * 2018-02-09 2021-11-02 驭势科技(北京)有限公司 Triangularization method for multi-camera instant positioning and map construction and moving body thereof
CN110585704B (en) * 2019-09-20 2021-04-09 腾讯科技(深圳)有限公司 Object prompting method, device, equipment and storage medium in virtual scene
US11818326B2 (en) * 2020-05-13 2023-11-14 Mediatek Singapore Pte. Ltd. Methods and apparatus for signaling viewing regions of various types in immersive media
KR20230155445A (en) * 2021-03-10 2023-11-10 퀄컴 인코포레이티드 Object collision data for virtual cameras in a virtual interactive scene defined by streamed media data

Also Published As

Publication number Publication date
TWI833560B (en) 2024-02-21
CN115713614A (en) 2023-02-24
