TWI833560B - Image scene construction method, apparatus, electronic equipment and storage medium - Google Patents

Image scene construction method, apparatus, electronic equipment and storage medium

Info

Publication number
TWI833560B
Authority
TW
Taiwan
Prior art keywords
scene
image
target
shooting
shooting device
Prior art date
Application number
TW112102524A
Other languages
Chinese (zh)
Other versions
TW202422478A (en)
Inventor
雲昊
許國軍
Original Assignee
大陸商立訊精密科技(南京)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 大陸商立訊精密科技(南京)有限公司
Application granted
Publication of TWI833560B
Publication of TW202422478A


Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The present invention provides an image scene construction method, apparatus, electronic equipment and storage medium. The method comprises: obtaining target positioning information of a target device; determining, based on the target positioning information, scene shooting devices within a preset radius centered on the target positioning information; determining a target image based on the positions and angle-of-view information of the scene shooting devices; and constructing the image scene based on the target image. With the technical means of the embodiments of the present invention, the devices used to acquire images for constructing an AR scene can be determined flexibly and in real time as the position of the target device changes, which improves the flexibility and practicality of image scene construction. At the same time, the image scene can be constructed precisely according to the different positions and viewing angles, so that the generated third-person AR scene closely reproduces the real world, which improves the accuracy of image scene construction.

Description

An image scene construction method, apparatus, electronic device, and storage medium

The present invention relates to the field of image processing technology, and in particular to an image scene construction method, apparatus, electronic device, and storage medium.

With the development of technologies such as virtual reality (VR) and augmented reality (AR), more and more industries have begun to adopt these multimedia technologies for three-dimensional modeling and intelligent interaction. The construction of image scenes, in particular, is widely used in industries and fields such as transportation and gaming, and brings a good experience to a large number of users.

Currently, methods for constructing a third-person-view three-dimensional image scene generally use recording equipment and positioning equipment to record images or videos of the spatial range in which the user is located, and then synthesize the scene with post-processing software to provide the user with a scene display from a third-person perspective. However, this approach is only suitable for indoor places where recording equipment can be conveniently arranged, for example when playing an AR game indoors. The use of this method is therefore rather limited and inflexible.

The present invention provides an image scene construction method, apparatus, electronic device, and storage medium, so as to improve the flexibility of constructing an image scene from a third-person perspective.

According to an aspect of the present invention, an image scene construction method is provided. The method includes: obtaining target positioning information of a target device; determining, according to the target positioning information, scene shooting devices within a preset radius centered on the target positioning information; determining a target image according to the positions and angle-of-view information of the scene shooting devices; and constructing an image scene according to the target image.

According to another aspect of the present invention, an image scene construction apparatus is provided, including: a positioning information acquisition module, configured to obtain target positioning information of a target device; a shooting device determination module, configured to determine, according to the target positioning information, scene shooting devices within a preset radius centered on the target positioning information; a target image determination module, configured to determine a target image according to the positions and angle-of-view information of the scene shooting devices; and an image scene construction module, configured to construct an image scene according to the target image.

According to another aspect of the present invention, an electronic device is provided. The electronic device includes: at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores a computer program executable by the at least one processor, and the computer program is executed by the at least one processor so that the at least one processor can perform the image scene construction method described in any embodiment of the present invention.

According to another aspect of the present invention, a computer-readable storage medium is provided. The computer-readable storage medium stores computer instructions, and the computer instructions are used to cause a processor, when executing them, to implement the image scene construction method described in any embodiment of the present invention.

In the technical means of the embodiments of the present invention, the scene shooting devices within a preset radius centered on the target positioning information of the target device are determined according to that positioning information. In this way, the devices used to acquire images for constructing an AR scene can be determined in real time and flexibly as the position of the target device changes, which improves the flexibility and practicality of image scene construction. At the same time, a target image is determined according to the positions and angle-of-view information of the scene shooting devices in order to construct the image scene, so that the image scene can be constructed precisely for different positions and viewing angles, the generated third-person AR scene closely reproduces the real world, and the accuracy of image scene construction is improved.

It should be understood that the content described in this section is not intended to identify key or important features of the embodiments of the present invention, nor is it intended to limit the scope of the present invention. Other features of the present invention will become easy to understand from the following description.

In order to enable those of ordinary skill in the art to better understand the solutions of the present invention, the technical means in the embodiments of the present invention will be described clearly and completely below with reference to the drawings of the embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the scope of protection of the present invention.

It should be noted that the terms "first", "second", and the like in the description, the claims, and the above drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data used in this way are interchangeable under appropriate circumstances, so that the embodiments of the present invention described herein can be implemented in orders other than those illustrated or described herein. In addition, the terms "including" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to those steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to such a process, method, product, or device.
[Embodiment 1]

Fig. 1 is a flow chart of an image scene construction method provided in Embodiment 1 of the present invention. This embodiment is applicable to constructing a third-person augmented-reality picture in a real environment. The method can be performed by an image scene construction apparatus, which can be implemented in the form of hardware and/or software and can be configured in an electronic device. As shown in Fig. 1, the method includes: S110. Obtain target positioning information of a target device.

The target device may be a device that needs a third-person three-dimensional scene display, such as AR (Augmented Reality) glasses. The target positioning information may be the position data of the target device in a given space, or specific geographic position information obtained by positioning with navigation satellites. Outdoors, for example, the target device may communicate with navigation satellites through a built-in navigation chip, and a back-end server may obtain the coordinate data of the target device, that is, the target positioning information.

S120. According to the target positioning information, determine the scene shooting devices within a preset radius centered on the target positioning information.

A scene shooting device may be any hardware device capable of capturing images or recording video, for example a shooting device located in the same given space as the target device or geographically close to it (it may likewise be a pair of AR glasses, or another device equipped with a camera; it may be a mobile shooting device or a fixed one, which is not limited in the embodiments of the present invention). The scene shooting devices are determined within a distance range whose center is the target device (the target positioning information) and whose radius is a preset length; that is, shooting devices other than the target device are determined within the preset radius centered on the target positioning information. The length of the preset radius can be set by those skilled in the art according to specific situations or experience, for example 10 meters or 20 meters. In practice, each scene shooting device also has a built-in navigation chip, so the back-end server can obtain the information of all shooting devices within the preset radius (which may include more than just positioning information) according to the target positioning information of the target device determined in the preceding step.
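
As a minimal sketch of this step, the radius check could be implemented on the back-end server as a great-circle distance filter over a registry of reported device coordinates. The `Device` record, the field names, and the 10-meter default below are illustrative assumptions rather than anything prescribed by the patent.

```python
import math
from dataclasses import dataclass

@dataclass
class Device:
    device_id: str
    lat: float  # latitude in degrees, reported by the device's navigation chip
    lon: float  # longitude in degrees

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def scene_devices_in_radius(target: Device, registry: list[Device],
                            preset_radius_m: float = 10.0) -> list[Device]:
    """S120: every shooting device other than the target within the preset radius."""
    return [d for d in registry
            if d.device_id != target.device_id
            and haversine_m(target.lat, target.lon, d.lat, d.lon) <= preset_radius_m]
```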

S130. Determine a target image according to the positions and angle-of-view information of the scene shooting devices.

After the scene shooting devices are determined in the preceding step, their positioning information (that is, their positions, or their positions relative to the target device) can be obtained. The angle-of-view information may include the camera's angle of view (for example, between 25° and 124°, depending on the hardware of the scene shooting device); it may also include the direction of the view, for example the forward direction at the center of the field of view. Image information within the field of view can be acquired by the scene shooting device, and the acquired medium may be an image or a video. The target image may be a scene image captured by a scene shooting device, or one frame of a video. It should be noted that the target image should contain the target device, so as to help establish the third-person AR scene of the target device.
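
The check of whether the target device lies inside a shooting device's angle of view could, for instance, be sketched as a bearing comparison against the reported view direction; the parameter names (`view_dir_deg`, `fov_deg`) and the purely horizontal treatment of the field of view are assumptions made for illustration.

```python
import math

def bearing_deg(cam_lat: float, cam_lon: float, tgt_lat: float, tgt_lon: float) -> float:
    """Compass bearing from the camera position to the target position, in degrees [0, 360)."""
    p1, p2 = math.radians(cam_lat), math.radians(tgt_lat)
    dl = math.radians(tgt_lon - cam_lon)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(y, x)) % 360.0

def target_in_view(cam_lat: float, cam_lon: float, view_dir_deg: float, fov_deg: float,
                   tgt_lat: float, tgt_lon: float) -> bool:
    """True if the target's bearing lies within the camera's horizontal angle of view."""
    offset = (bearing_deg(cam_lat, cam_lon, tgt_lat, tgt_lon) - view_dir_deg + 180.0) % 360.0 - 180.0
    return abs(offset) <= fov_deg / 2.0
```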

Optionally, the preset radius includes a first preset radius. Correspondingly, determining the target image according to the positions and angle-of-view information of the scene shooting devices may include: screening out target shooting devices from the first preset radius according to the positions and angle-of-view information of the scene shooting devices; and, when the number of target shooting devices meets a preset quantity threshold, using the images captured by the target shooting devices as the target images.

The first preset radius may be the radius range of the main shooting devices used to determine the third-person AR view. For example, the first preset radius may be set to 10 meters; that is, with the target device as the center, all scene shooting devices within a radius of 10 meters can serve as shooting devices for constructing the third-person AR scene for the target device.

However, the images captured by too few scene shooting devices are very likely to be insufficient to construct an AR scene image in post-processing. A quantity threshold therefore needs to be set in advance: only when the number of scene shooting devices within the first preset radius exceeds this threshold is the target image acquired, thereby providing material for the subsequent construction of AR scene images. For example, the threshold may be set to 5. It should also be explained that, since the target device may be AR glasses or another device worn by the user, the target device moves as the user moves, so the back-end server needs to determine the number of scene shooting devices within the first preset radius around the target device according to the movement of the target device (real-time positioning).

It can be understood that only when the field of view contains the target device (and the user) can the corresponding image be conveniently used, during later synthesis, to construct a third-person AR image scene with the target device (and the user) as the subject. Therefore, in an optional implementation, screening out the target shooting devices from the first preset radius according to the positions and angle-of-view information of the scene shooting devices may include: if a scene shooting device is within the first preset radius of the target device and the target device is present in the angle-of-view information of that scene shooting device, determining the scene shooting device as a target shooting device.
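
Combining the radius condition and the in-view condition with the quantity threshold might look like the sketch below; the `Candidate` record, the 10-meter radius, and the threshold of 5 reuse the illustrative values mentioned above and are not fixed requirements.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    device_id: str
    distance_m: float   # distance to the target device (e.g. from a distance check as above)
    sees_target: bool   # whether the target device lies inside this camera's angle of view

def select_target_shooting_devices(candidates: list[Candidate],
                                   first_radius_m: float = 10.0,
                                   count_threshold: int = 5) -> list[Candidate]:
    """Keep the devices within the first preset radius whose view contains the target;
    return them only if there are enough of them to provide material for the scene."""
    selected = [c for c in candidates
                if c.distance_m <= first_radius_m and c.sees_target]
    return selected if len(selected) >= count_threshold else []
```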

It is conceivable that, within the first preset radius, a scene shooting device that can capture the target device can serve as a target shooting device for acquiring the target image, which helps later construct a third-person AR scene image with the target device as the subject.

In another optional implementation, screening out the target shooting devices from the first preset radius according to the positions and angle-of-view information of the scene shooting devices may include: if the distance between a scene shooting device and the target device equals the first preset radius and the target device is present in the angle-of-view information of that scene shooting device, determining the scene shooting device as a target shooting device.

It can be understood that, in practice, since the angles of view of the various scene shooting devices are similar (almost all wide-angle), different scene shooting devices are at different distances from the target device and the user wearing it, which causes the user to be imaged at different sizes by different devices (owing to the physical principle of perspective). During later image synthesis and scene construction, the different imaging sizes of the target subject (the user wearing the target device) in different images impose a computational burden on constructing the same AR scene: the computation is heavy and error-prone, and more computing resources are consumed on restoring the image of the target subject.

Therefore, when the scene shooting devices are at the same distance from the target device, for example all at the first preset radius, and the target device (and the user) is present in their angle-of-view information, later image synthesis and scene construction are greatly facilitated: the amount of computation can be further reduced, and the efficiency and accuracy of AR scene construction improved.

Further, the preset radius includes a second preset radius, and the second preset radius is larger than the first preset radius. Correspondingly, after the target image is determined according to the positions and angle-of-view information of the scene shooting devices, the method may further include: if the distance between a scene shooting device and the target device is between the first preset radius and the second preset radius, or the target device is not present in the angle-of-view information of the scene shooting device, using that scene shooting device as an auxiliary shooting device.

It should be noted that, in a common situation, using only the scene shooting devices whose angle-of-view information contains the target device to photograph the user and construct the AR scene may leave the objects and environment in the user's background or foreground missing some image information, so that the finally constructed third-person AR scene is incomplete. The image information of such environments and objects therefore needs to be supplemented.

There are then two cases. First, an auxiliary shooting device used to supplement image information such as the environment and objects is not within the first preset radius. Second, regardless of whether an auxiliary shooting device is within the first preset radius, its angle-of-view information does not contain the target device (and the user). It can be understood that a device satisfying at least one of these two cases can serve as an auxiliary shooting device and acquire scene images other than those of the target device (and the user), so as to better supplement the background and foreground image information of the third-person AR scene and make the construction of the AR scene more accurate and complete. Since image acquisition by each shooting device is subject to certain distance or quantity limits, a second preset radius larger than the first preset radius can be set, so that determining the available auxiliary shooting devices within the second preset radius is a feasible implementation; for example, the second preset radius may be set to 20 meters.
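
One way to read this classification rule as code is the following sketch, which splits the candidate devices within the second preset radius into target and auxiliary shooting devices; the `Candidate` record and the 10 m / 20 m radii again repeat the illustrative values from the examples above.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    device_id: str
    distance_m: float
    sees_target: bool

def classify_devices(candidates: list[Candidate],
                     first_radius_m: float = 10.0,
                     second_radius_m: float = 20.0) -> tuple[list[Candidate], list[Candidate]]:
    """Split the candidate devices into (target shooting devices, auxiliary shooting devices)."""
    target_devs: list[Candidate] = []
    auxiliary_devs: list[Candidate] = []
    for c in candidates:
        if c.distance_m > second_radius_m:
            continue  # beyond the effective shooting range, not used at all
        if c.distance_m <= first_radius_m and c.sees_target:
            target_devs.append(c)        # close enough and its view contains the target device
        else:
            auxiliary_devs.append(c)     # between the two radii, or the target is not in its view
    return target_devs, auxiliary_devs
```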

S140. Construct the image scene according to the target image.

Based on the target images obtained in the above steps and implementations, the planar images are synthesized, by means of any post-processing image technology in the prior art, into a third-person AR scene whose central subject is the target device (and the user).

Optionally, constructing the image scene according to the target image may include: constructing the image scene according to the target image and the auxiliary images captured by the auxiliary shooting devices.

The target image may be an image that contains the target device, and an auxiliary image may be an image that does not contain the target device. Synthesizing the AR scene only from images containing the target device may result in incomplete image information and loss of part of the foreground or background. Combining the target images and the auxiliary images to construct the image scene therefore completes the scene information and makes the constructed AR scene more complete.

Further, after the image scene is constructed according to the target image, the method may further include: projecting the image scene to the target device for display.

After the image scene is constructed, its visual information is sent to the target device for display. For example, the constructed third-person AR scene may be fed back to the AR glasses worn by the user, so that the user can see his or her own third-person view through the AR glasses, which improves the user experience.

In the technical means of the embodiments of the present invention, the scene shooting devices within a preset radius centered on the target positioning information of the target device are determined according to that positioning information. In this way, the devices used to acquire images for constructing an AR scene can be determined in real time and flexibly as the position of the target device changes, which improves the flexibility and practicality of image scene construction. At the same time, a target image is determined according to the positions and angle-of-view information of the scene shooting devices in order to construct the image scene, so that the image scene can be constructed precisely for different positions and viewing angles, the generated third-person AR scene closely reproduces the real world, and the accuracy of image scene construction is improved.
[Embodiment 2]

Fig. 2 relates to an image scene construction method provided in Embodiment 2 of the present invention; this embodiment is an idealized embodiment provided on the basis of the foregoing implementations. As shown in Fig. 2, the method includes: the target device (for example AR glasses or another device worn by the user) starts the third-person service through the human-computer interaction interface of a third-person application, and the application then uploads the positioning information of the target device to the server over the network.

The server searches its database for other AR devices within a certain range around the position of the target device (the general camera focal-length range of AR devices, for example 10 meters). If the number found exceeds a validity threshold (for example 5), the server sends a third-person service support request to the devices that meet the position condition (that is, the 10-meter range) and requests their field-of-view information.

After the field-of-view information is obtained, a group of AR devices is selected as the target shooting devices: their computed distances to the target device are the same, their forward view directions differ from one another as much as possible (it can be understood that small differences between forward view directions would make the scene information contained in the acquired views rather repetitive), and their fields of view contain the target device (as shown in Fig. 2). The other AR devices within the effective range (the maximum effective shooting range of a general AR camera, for example 20 meters) are determined as auxiliary shooting devices. Requests to start the third-person support service as a target shooting device or as an auxiliary shooting device are sent to them accordingly. The difference between the two roles may be that the scene pictures collected by the target shooting devices only undergo rotation, cropping, and stitching and therefore need high pixel counts, whereas the pictures collected by the auxiliary shooting devices are used to reference and confirm scene objects and details, need cropping, scaling, and virtual processing, and do not require very high pixel counts.
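
The requirement that the selected target shooting devices have forward view directions that differ from one another as much as possible could be approximated with a greedy spread selection such as the sketch below; the greedy max-min criterion and the function names are assumptions, since the embodiment does not fix a particular selection algorithm.

```python
def angular_gap(a: float, b: float) -> float:
    """Smallest absolute difference between two directions given in degrees."""
    return abs((a - b + 180.0) % 360.0 - 180.0)

def pick_spread_views(view_dirs: dict[str, float], k: int) -> list[str]:
    """Greedily pick k device ids whose forward view directions differ from one
    another as much as possible (maximize the minimum pairwise angular gap)."""
    if not view_dirs or k <= 0:
        return []
    remaining = dict(view_dirs)
    first = next(iter(remaining))       # arbitrary seed device
    chosen = [first]
    del remaining[first]
    while remaining and len(chosen) < k:
        best = max(remaining,
                   key=lambda d: min(angular_gap(view_dirs[d], view_dirs[c]) for c in chosen))
        chosen.append(best)
        del remaining[best]
    return chosen
```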

The target shooting devices and auxiliary shooting devices turn on their AR front cameras for continuous shooting at 10 Hz and send the pictures and field-of-view information to the server. Every second, or every frame, the server determines whether the field of view still contains the position of the target device, then crops the pictures from all target shooting devices, and the missing and unclear parts are rendered from the pictures of the auxiliary shooting devices to form a 3D scene picture. A user-centered 3D scene picture is then modeled and rendered by combining the position of the target device with the user's figure.
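
The per-frame check described here can be pictured as a routing step on the server; the sketch below shows only the decision logic (which frames feed the crop/stitch path and which serve as auxiliary references), with the `FrameMeta` fields and the handling of dropped frames being assumptions — the actual cropping, rendering, and 3D modeling are left to existing post-processing techniques.

```python
from dataclasses import dataclass

@dataclass
class FrameMeta:
    device_id: str
    is_target_device: bool  # selected earlier as a target shooting device?
    sees_target: bool       # does its current field of view still contain the target position?

def route_frames(batch: list[FrameMeta]) -> dict[str, list[str]]:
    """Per-frame routing decision made at the capture rate (e.g. 10 Hz):
    frames from target shooting devices that still see the target feed the
    rotate/crop/stitch path, frames from auxiliary devices feed the reference/render path."""
    routes: dict[str, list[str]] = {"stitch": [], "auxiliary": [], "dropped": []}
    for m in batch:
        if m.is_target_device and m.sees_target:
            routes["stitch"].append(m.device_id)
        elif not m.is_target_device:
            routes["auxiliary"].append(m.device_id)
        else:
            routes["dropped"].append(m.device_id)  # a target device that lost sight of the target
    return routes
```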

The server sends the constructed scene images to the service-requesting device (that is, the target device) over the network in real time, and the target device projects these images onto its AR lenses to complete the third-person scene switch, so that the user can see, through the AR glasses he or she is wearing, a three-dimensional third-person AR scene centered on himself or herself.
[Embodiment 3]

Fig. 3 is a schematic structural diagram of an image scene construction apparatus provided in Embodiment 3 of the present invention. As shown in Fig. 3, the apparatus 300 includes: a positioning information acquisition module 310, configured to obtain target positioning information of a target device; a shooting device determination module 320, configured to determine, according to the target positioning information, scene shooting devices within a preset radius centered on the target positioning information; a target image determination module 330, configured to determine a target image according to the positions and angle-of-view information of the scene shooting devices; and an image scene construction module 340, configured to construct an image scene according to the target image.

In the technical means of the embodiments of the present invention, the scene shooting devices within a preset radius centered on the target positioning information of the target device are determined according to that positioning information. In this way, the devices used to acquire images for constructing an AR scene can be determined in real time and flexibly as the position of the target device changes, which improves the flexibility and practicality of image scene construction. At the same time, a target image is determined according to the positions and angle-of-view information of the scene shooting devices in order to construct the image scene, so that the image scene can be constructed precisely for different positions and viewing angles, the generated third-person AR scene closely reproduces the real world, and the accuracy of image scene construction is improved.

In an optional implementation, the preset radius includes a first preset radius. Correspondingly, the target image determination module 330 may include: a target device screening unit, configured to screen out target shooting devices from the first preset radius according to the positions and angle-of-view information of the scene shooting devices; and a target image determination unit, configured to use the images captured by the target shooting devices as the target images when the number of target shooting devices meets a preset quantity threshold.

In an optional implementation, the target device screening unit may be specifically configured to: if a scene shooting device is within the first preset radius of the target device and the target device is present in the angle-of-view information of that scene shooting device, determine the scene shooting device as a target shooting device.

In an optional implementation, the target device screening unit may be specifically configured to: if the distance between a scene shooting device and the target device equals the first preset radius and the target device is present in the angle-of-view information of that scene shooting device, determine the scene shooting device as a target shooting device.

In an optional implementation, the preset radius includes a second preset radius, and the second preset radius is larger than the first preset radius. Correspondingly, the target image determination module 330 may further be configured to: if the distance between a scene shooting device and the target device is between the first preset radius and the second preset radius, or the target device is not present in the angle-of-view information of the scene shooting device, use that scene shooting device as an auxiliary shooting device.

In an optional implementation, the image scene construction module 340 may be specifically configured to: construct the image scene according to the target image and the auxiliary images captured by the auxiliary shooting devices.

In an optional implementation, the apparatus 300 may further include an image scene display module, configured to project the image scene to the target device for display.

The image scene construction apparatus provided in the embodiments of the present invention can perform the image scene construction method provided in any embodiment of the present invention, and has the functional modules and effects corresponding to performing each image scene construction method.
[Embodiment 4]

Fig. 4 shows a schematic structural diagram of an electronic device 10 that can be used to implement embodiments of the present invention. Electronic devices are intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, mobile phones, smartphones, wearable devices (such as helmets, glasses, and watches), and other similar computing devices. The components shown in this specification, their connections and relationships, and their functions are only examples and are not intended to limit the implementation of the present invention described and/or claimed in this specification.

As shown in Fig. 4, the electronic device 10 includes at least one processor 11 and a memory communicatively connected to the at least one processor 11, such as a read-only memory (ROM) 12 and a random access memory (RAM) 13. The memory stores a computer program executable by the at least one processor. The processor 11 can perform various appropriate actions and processing according to the computer program stored in the read-only memory (ROM) 12 or the computer program loaded from the storage unit 18 into the random access memory (RAM) 13. Various programs and data required for the operation of the electronic device 10 can also be stored in the random access memory (RAM) 13. The processor 11, the read-only memory (ROM) 12, and the random access memory (RAM) 13 are connected to one another through a bus 14. An input/output (I/O) interface 15 is also connected to the bus 14.

A plurality of components in the electronic device 10 are connected to the input/output (I/O) interface 15, including: an input unit 16, such as a keyboard or a mouse; an output unit 17, such as various types of displays and speakers; a storage unit 18, such as a magnetic disk or an optical disc; and a communication unit 19, such as a network card, a modem, or a wireless communication transceiver. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunications networks.

The processor 11 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the processor 11 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various processors running machine-learning model algorithms, a digital signal processor (DSP), and any appropriate processor, controller, microcontroller, and so on. The processor 11 performs the methods and processing described above, such as the image scene construction method.

In some embodiments, the image scene construction method may be implemented as a computer program tangibly contained in a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the read-only memory (ROM) 12 and/or the communication unit 19. When the computer program is loaded into the random access memory (RAM) 13 and executed by the processor 11, one or more steps of the image scene construction method described above can be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the image scene construction method in any other suitable manner (for example, by means of firmware).

Various implementations of the systems and techniques described above in this specification can be realized in digital electronic circuit systems, integrated circuit systems, field-programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), application-specific standard products (ASSP), systems on chip (SOC), complex programmable logic devices (CPLD), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include being implemented in one or more computer programs, which can be executed and/or interpreted on a programmable system including at least one programmable processor; the programmable processor may be a special-purpose or general-purpose programmable processor that can receive data and instructions from a storage system, at least one input device, and at least one output device, and transmit data and instructions to the storage system, the at least one input device, and the at least one output device.

A computer program for implementing the method of the present invention may be written in any combination of one or more programming languages. Such a computer program may be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data-processing apparatus, so that, when executed by the processor, it causes the functions/operations specified in the flow charts and/or block diagrams to be implemented. A computer program may be executed entirely on a machine, partly on a machine, partly on a machine and partly on a remote machine as a stand-alone software package, or entirely on a remote machine or server.

In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain or store a computer program for use by, or in connection with, an instruction execution system, apparatus, or device. The computer-readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer-readable storage medium may be a machine-readable signal medium. More specific examples of a machine-readable storage medium include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM) or flash memory, an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

To provide interaction with a user, the systems and techniques described herein can be implemented on an electronic device having: a display device (for example, a cathode ray tube (CRT) or liquid crystal display (LCD) monitor) for displaying information to the user; and a keyboard and a pointing device (for example, a mouse or a trackball) through which the user can provide input to the electronic device. Other kinds of devices can also be used to provide interaction with the user; for example, the feedback provided to the user may be any form of sensory feedback (for example, visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form (including acoustic input, voice input, or tactile input).

The systems and techniques described herein can be implemented in a computing system that includes a back-end component (for example, as a data server), or a computing system that includes a middleware component (for example, an application server), or a computing system that includes a front-end component (for example, a user computer with a graphical user interface or a web browser through which the user can interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by digital data communication in any form or medium (for example, a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), a blockchain network, and the Internet.

The computing system may include clients and servers. Clients and servers are generally remote from each other and usually interact through a communication network. The client-server relationship is produced by computer programs that run on the respective computers and have a client-server relationship with each other. The server may be a cloud server, also known as a cloud computing server or cloud host, which is a host product in the cloud computing service system that overcomes the defects of difficult management and weak business scalability found in traditional physical hosts and VPS services.

It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, as long as the results expected by the technical means of the present invention can be achieved; this specification imposes no limitation in this respect.

The above specific implementations do not constitute a limitation on the scope of protection of the present invention. Those of ordinary skill in the art should understand that various modifications, combinations, sub-combinations, and substitutions can be made according to design requirements and other factors. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

The present invention claims priority to Chinese patent application No. 202211493520.1, filed with the China Patent Office on November 25, 2022, the entire contents of which are incorporated herein by reference.

10: electronic device
11: processor
12: read-only memory (ROM)
13: random access memory (RAM)
14: bus
15: input/output (I/O) interface
16: input unit
17: output unit
18: storage unit
19: communication unit
300: apparatus
310: positioning information acquisition module
320: shooting device determination module
330: target image determination module
340: image scene construction module

In order to explain the technical means in the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
[Fig. 1] is a flow chart of an image scene construction method provided according to Embodiment 1 of the present invention.
[Fig. 2] is a schematic diagram of shooting devices applicable to Embodiment 2 of the present invention.
[Fig. 3] is a schematic structural diagram of an image scene construction apparatus provided according to Embodiment 3 of the present invention.
[Fig. 4] is a schematic structural diagram of an electronic device implementing the image scene construction method of an embodiment of the present invention.

S110, S120, S130, S140: steps

Claims (10)

1. An image scene construction method, applied to an electronic device, comprising: obtaining target positioning information of a target device; determining, according to the target positioning information, scene shooting devices within a preset radius centered on the target positioning information; determining a target image according to the position and angle-of-view information of the scene shooting devices; and constructing an image scene according to the target image; wherein the image scene is an augmented reality scene; and wherein the target device is a device that needs to perform a third-person three-dimensional scene display, and the scene shooting device is a hardware device for image capture or video recording.

2. The method according to claim 1, wherein the preset radius comprises a first preset radius; and correspondingly, determining the target image according to the position and angle-of-view information of the scene shooting devices comprises: screening out target shooting devices from the first preset radius according to the position and angle-of-view information of the scene shooting devices; and, when the number of target shooting devices meets a preset quantity threshold, using the images captured by the target shooting devices as the target image.

3. The method according to claim 2, wherein screening out the target shooting devices from the first preset radius according to the position and angle-of-view information of the scene shooting devices comprises: if a scene shooting device is within the first preset radius of the target device and the target device is present in the angle-of-view information of the scene shooting device, determining the scene shooting device as a target shooting device.

4. The method according to claim 2, wherein screening out the target shooting devices from the first preset radius according to the position and angle-of-view information of the scene shooting devices comprises: if the distance between a scene shooting device and the target device equals the first preset radius and the target device is present in the angle-of-view information of the scene shooting device, determining the scene shooting device as a target shooting device.

5. The method according to any one of claims 2 to 4, wherein the preset radius comprises a second preset radius, and the second preset radius is larger than the first preset radius; and correspondingly, after determining the target image according to the position and angle-of-view information of the scene shooting devices, the method further comprises: if the distance between a scene shooting device and the target device is between the first preset radius and the second preset radius, or the target device is not present in the angle-of-view information of the scene shooting device, using the scene shooting device as an auxiliary shooting device.

6. The method according to claim 5, wherein constructing the image scene according to the target image comprises: constructing the image scene according to the target image and an auxiliary image captured by the auxiliary shooting device.

7. The method according to any one of claims 1 to 4, wherein, after constructing the image scene according to the target image, the method further comprises: projecting the image scene to the target device for display.

8. An image scene construction apparatus, comprising: a positioning information acquisition module, configured to obtain target positioning information of a target device; a shooting device determination module, configured to determine, according to the target positioning information, scene shooting devices within a preset radius centered on the target positioning information; a target image determination module, configured to determine a target image according to the position and angle-of-view information of the scene shooting devices; and an image scene construction module, configured to construct an image scene according to the target image; wherein the image scene is an augmented reality scene; and wherein the target device is a device that needs to perform a third-person three-dimensional scene display, and the scene shooting device is a hardware device for image capture or video recording.

9. An electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores a computer program executable by the at least one processor, and the computer program is executed by the at least one processor so that the at least one processor can perform the image scene construction method according to any one of claims 1 to 7.

10. A computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause a processor, when executing them, to implement the image scene construction method according to any one of claims 1 to 7.
TW112102524A 2022-11-25 2023-01-19 Image scene construction method, apparatus, electronic equipment and storage medium TWI833560B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211493520.1 2022-11-25
CN202211493520.1A CN115713614A (en) 2022-11-25 2022-11-25 Image scene construction method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
TWI833560B true TWI833560B (en) 2024-02-21
TW202422478A TW202422478A (en) 2024-06-01

Family

ID=85234811

Family Applications (1)

Application Number Title Priority Date Filing Date
TW112102524A TWI833560B (en) 2022-11-25 2023-01-19 Image scene construction method, apparatus, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN115713614A (en)
TW (1) TWI833560B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI526992B (en) * 2015-01-21 2016-03-21 國立清華大學 Method for optimizing occlusion in augmented reality based on depth camera
TWI659335B (en) * 2017-05-25 2019-05-11 大陸商騰訊科技(深圳)有限公司 Graphic processing method and device, virtual reality system, computer storage medium
CN110132242A (en) * 2018-02-09 2019-08-16 驭势科技(北京)有限公司 Multiple-camera positions and the Triangulation Algorithm and its movable body of map structuring immediately
CN110585704A (en) * 2019-09-20 2019-12-20 腾讯科技(深圳)有限公司 Object prompting method, device, equipment and storage medium in virtual scene
TW202203651A (en) * 2020-05-13 2022-01-16 新加坡商聯發科技(新加坡)私人有限公司 Methods and apparatus for signaling viewing regions of various types in immersive media
TW202240431A (en) * 2021-03-10 2022-10-16 美商高通公司 Object collision data for virtual camera in virtual interactive scene defined by streamed media data

Also Published As

Publication number Publication date
CN115713614A (en) 2023-02-24

Similar Documents

Publication Publication Date Title
CN112053446B (en) Real-time monitoring video and three-dimensional scene fusion method based on three-dimensional GIS
CN106251399B (en) A kind of outdoor scene three-dimensional rebuilding method and implementing device based on lsd-slam
US20120293613A1 (en) System and method for capturing and editing panoramic images
WO2019238114A1 (en) Three-dimensional dynamic model reconstruction method, apparatus and device, and storage medium
CN108053473A (en) A kind of processing method of interior three-dimensional modeling data
GB2591857A (en) Photographing-based 3D modeling system and method, and automatic 3D modeling apparatus and method
WO2023280038A1 (en) Method for constructing three-dimensional real-scene model, and related apparatus
CN111127524A (en) Method, system and device for tracking trajectory and reconstructing three-dimensional image
CN112927362A (en) Map reconstruction method and device, computer readable medium and electronic device
WO2021093679A1 (en) Visual positioning method and device
CN112740261A (en) Panoramic light field capture, processing and display
WO2023226370A1 (en) Three-dimensional reproduction method and system for target object
CN112270702A (en) Volume measurement method and device, computer readable medium and electronic equipment
CN115690382A (en) Training method of deep learning model, and method and device for generating panorama
JP2019022151A (en) Information processing apparatus, image processing system, control method, and program
CN110296686A (en) Localization method, device and the equipment of view-based access control model
WO2023169281A1 (en) Image registration method and apparatus, storage medium, and electronic device
CN113253842A (en) Scene editing method and related device and equipment
TW202244680A (en) Pose acquisition method, electronic equipment and storage medium
CN111083368A (en) Simulation physics cloud platform panoramic video display system based on high in clouds
CN114882106A (en) Pose determination method and device, equipment and medium
WO2022126921A1 (en) Panoramic picture detection method and device, terminal, and storage medium
WO2021217403A1 (en) Method and apparatus for controlling movable platform, and device and storage medium
TWI833560B (en) Image scene construction method, apparatus, electronic equipment and storage medium
CN213126248U (en) Intelligent interaction system for metro vehicle section construction site and BIM scene