WO2014183533A1 - Image processing method, user terminal, image processing terminal and system - Google Patents
Image processing method, user terminal, image processing terminal and system
- Publication number
- WO2014183533A1 (PCT/CN2014/075687)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image processing
- user terminal
- multimedia data
- terminal
- resources
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/816—Monomedia components thereof involving special video data, e.g. 3D video
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4318—Generation of visual interfaces for content selection or interaction; Content or additional data rendering by altering the content in the rendering process, e.g. blanking, blurring or masking an image region
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/44213—Monitoring of end-user related data
- H04N21/44218—Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
Definitions
- Image processing method, user terminal, image processing terminal and system
- The present invention relates to the field of multimedia communication technologies, and in particular to an image processing method, a user terminal, an image processing terminal, and an image processing system.
- Background Art
- In the prior art, a plurality of cameras are used as acquisition devices to perform image acquisition on a subject to generate 3D resources.
- To obtain the 3D resources, a resource producer such as a television station is required to adjust the camera parameters and camera positions in advance.
- an embodiment of the present invention provides an image processing method, a user terminal, an image processing terminal, and an image processing system, which can quickly and stably obtain 3D resources without high production cost.
- An image processing method, the method including:
- the user terminal reports acquisition parameters and multimedia data resources based on different viewpoints; the user terminal is any user terminal whose location is not fixed;
- the user terminal receives the image processing result of the multi-view synthesis obtained from the acquisition parameter and the multimedia data resource.
- the acquisition parameter comprises: at least one of a shooting parameter and a shooting position.
- Preferably, the method further includes: upon receiving the image processing result of multi-view synthesis obtained according to the acquisition parameters and the multimedia data resources, selecting a result from the multi-view synthesized image processing results for display according to a display parameter configuration or a hardware operation configuration of the user terminal.
- An image processing method comprising:
- the image processing terminal receives the collection parameter and the multimedia data resource based on different viewpoints; the image processing terminal obtains the image processing result of the multi-view synthesis according to the collection parameter and the multimedia data resource.
- the image processing terminal obtains the image processing result of the multi-view synthesis according to the collection parameter and the multimedia data resource, and includes:
- the image processing terminal establishes reference coordinates and reference resources
- the image processing terminal compares the established reference coordinates and reference resources with the acquisition parameters and the multimedia data resources, groups the multimedia data resources, and selects therefrom grouped data matching the reference coordinates and reference resources;
- the image processing terminal performs data synthesis of the respective viewpoints on the grouped data, and obtains a plurality of different viewpoint data sets as the image processing result of multi-view synthesis.
- a user terminal includes:
- the reporting unit is configured to report the collection parameters and the multimedia data resources based on the different viewpoints;
- the first receiving unit is configured to receive the image processing result of multi-view synthesis obtained according to the acquisition parameters and the multimedia data resources;
- the user terminal is any user terminal whose location is not fixed.
- the acquisition parameter comprises: at least one of a shooting parameter and a shooting position.
- the user terminal further includes:
- a display unit, configured to, upon receiving the image processing result of multi-view synthesis obtained according to the acquisition parameters and the multimedia data resources, select one of the multi-view synthesized image processing results for display according to a display parameter configuration or a hardware operation configuration of the user terminal.
- When performing processing, the reporting unit and the first receiving unit may be implemented by a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or a Field-Programmable Gate Array (FPGA).
- An image processing terminal comprising:
- a second receiving unit configured to receive the collection parameter and the multimedia data resource based on different viewpoints
- An image processing unit configured to obtain an image processing result of multi-view synthesis according to the collection parameter and the multimedia data resource;
- a sending unit configured to send the image processing result.
- the image processing unit further includes:
- an establishing subunit, configured to establish reference coordinates and reference resources;
- a comparison subunit, configured to compare the established reference coordinates and reference resources with the acquisition parameters and the multimedia data resources, group the multimedia data resources, and select therefrom grouped data matching the reference coordinates and reference resources;
- a synthesis subunit, configured to perform data synthesis of the respective viewpoints on the grouped data, to obtain a plurality of different viewpoint data sets as the image processing result of multi-view synthesis.
- When performing processing, the second receiving unit, the image processing unit, the sending unit, the establishing subunit, the comparison subunit, and the synthesis subunit may be implemented by a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or a Field-Programmable Gate Array (FPGA).
- An image processing system comprising: a user terminal and an image processing terminal;
- the user terminal is configured to report acquisition parameters and multimedia data resources based on different viewpoints, and to receive the image processing result of multi-view synthesis obtained according to the acquisition parameters and the multimedia data resources; the user terminal is any user terminal whose location is not fixed;
- the image processing terminal is configured to receive acquisition parameters and multimedia data resources based on different viewpoints; obtain image processing results of multi-view synthesis according to the collection parameters and the multimedia data resource; and send the image processing result.
- the user terminal may be the user terminal according to any one of the above items;
- the image processing terminal may be the image processing terminal according to any one of the above.
- The image processing method of the embodiment of the present invention includes: the user terminal reports acquisition parameters and multimedia data resources based on different viewpoints, the user terminal being any user terminal whose location is not fixed; and the user terminal receives the image processing result of multi-view synthesis obtained according to the acquisition parameters and the multimedia data resources. Since the user terminal is any user terminal whose location is not fixed, the user who uses it can be called a free user, and the data resources provided by free users do not require adjusting camera parameters, camera positions, and the like in advance. As a result, 3D resources can be obtained quickly and stably without expensive production costs.
- FIG. 1 is a schematic diagram of image acquisition in a 3D image capturing scene in the prior art
- FIG. 2 is a flowchart of implementing an image processing method according to an embodiment of the present invention.
- FIG. 3 is a flowchart of implementing an image processing method according to an embodiment of the present invention.
- FIG. 4 is a flowchart of a free-view synthesis scenario to which an embodiment of the present invention is applied;
- FIG. 5 is a schematic diagram of a data format to which an embodiment of the present invention is applied;
- FIG. 6 is a schematic diagram of a data format to which an embodiment of the present invention is applied.
- DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS The implementation of the technical solution will be further described in detail below with reference to the accompanying drawings.
- the image processing method of the embodiment of the present invention includes the following steps: Step 101: The user terminal reports acquisition parameters and multimedia data resources based on different viewpoints; the user terminal is any user terminal whose location is not fixed.
- the acquisition parameters and multimedia data resources based on different viewpoints are for the same shooting scene.
- the multimedia data resource includes: various multimedia data resources including text, image, video, and the like.
- the user terminal is any user terminal whose location is not fixed, and a user who uses such a user terminal may be referred to as a free user.
- Step 102: The user terminal receives the image processing result of multi-view synthesis obtained according to the acquisition parameters and the multimedia data resources.
- the user terminal that receives the synthesized image processing result in step 102 and the user terminal that uploads the source data in step 101 may be different terminals, or may be the same terminal.
- The acquisition parameters include at least one of a shooting parameter and a shooting position used when the user terminal captures the photographed object.
- the shooting parameters include: shooting quality, shooting angle, and the like.
- Preferably, the method further includes: upon receiving the image processing results of multi-view synthesis obtained according to the acquisition parameters and the multimedia data resources, selecting from the multi-view synthesized image processing results for display according to a display parameter configuration or a hardware operation configuration of the user terminal.
- A user terminal can select one or more of the results for display.
- The display parameter configuration may be, for example, the display resolution; the hardware operation configuration may be parameters such as the terminal's operating speed and resource occupation. A result is thus selected for display according to parameters such as display resolution, terminal operating speed, and resource occupation, so that the selected result is the one best suited for display.
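By way of illustration only, the selection step described above can be sketched as follows; the result metadata fields (`resolution`, `cpu_cost`) and the selection rule are hypothetical assumptions, since the embodiment does not prescribe a concrete format:

```python
# Illustrative sketch only: the embodiment does not define concrete data
# structures, so the fields below (resolution, cpu_cost) are assumptions.

def select_result(results, display_resolution, max_cpu_cost):
    """Pick the synthesized result best suited to this terminal's
    display resolution and hardware resource budget."""
    # Keep only results the terminal can handle within its resource budget.
    affordable = [r for r in results if r["cpu_cost"] <= max_cpu_cost]
    if not affordable:
        return None
    # Prefer the result whose resolution does not exceed the display's;
    # among those, take the highest resolution.
    fitting = [r for r in affordable if r["resolution"] <= display_resolution]
    pool = fitting or affordable
    return max(pool, key=lambda r: r["resolution"])

results = [
    {"viewpoint": "left",  "resolution": 720,  "cpu_cost": 2},
    {"viewpoint": "front", "resolution": 1080, "cpu_cost": 5},
    {"viewpoint": "right", "resolution": 2160, "cpu_cost": 9},
]
chosen = select_result(results, display_resolution=1080, max_cpu_cost=6)
print(chosen["viewpoint"])  # prints "front"
```

In this sketch the display parameter configuration maps to `display_resolution` and the hardware operation configuration to `max_cpu_cost`; a real terminal would derive both from its own device profile.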
- The image processing method of the embodiment of the present invention includes the following steps: Step 201: The image processing terminal receives acquisition parameters and multimedia data resources based on different viewpoints.
- Step 202: The image processing terminal obtains the image processing result of multi-view synthesis according to the acquisition parameters and the multimedia data resources, and sends the image processing result to the user terminal.
- For step 202, after the image processing result of multi-view synthesis is obtained, the result may be sent to the user terminal, or it may only be stored for subsequent use by the user; that is, the result need not be sent to the user immediately.
- The use of the image processing result by the image processing terminal includes several situations: 1) the image processing result is stored at the image processing terminal; for example, the image processing terminal acts as a server, the server provides a link for connection, and the user terminal sends a request to obtain the image processing result; 2) the image processing result is sent directly to the user terminal, as in step 202; for example, the image processing terminal acts as a server that sends the result directly to the user terminal; 3) the result may also be stored at another location, from which it can be obtained through a query or other operations.
- In a preferred embodiment, obtaining the image processing result of multi-view synthesis in step 202 includes: the image processing terminal establishes reference coordinates and reference resources;
- the image processing terminal compares the established reference coordinates and reference resources with the acquisition parameters and the multimedia data resources, groups the multimedia data resources, and selects therefrom grouped data matching the reference coordinates and reference resources;
- data synthesis of each viewpoint is performed on the grouped data, and a plurality of different viewpoint data sets are obtained as the image processing result of multi-view synthesis.
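By way of illustration only, the grouping and per-viewpoint synthesis steps above can be sketched as follows; the reference coordinates, the distance threshold, and the stand-in "synthesis" (which merely collects the matched resources per viewpoint) are hypothetical assumptions rather than the claimed algorithm:

```python
import math

# Hypothetical reference coordinates: one entry per target viewpoint.
REFERENCE = {
    "left":  {"x": -1.0, "y": 0.0},
    "front": {"x":  0.0, "y": 0.0},
    "right": {"x":  1.0, "y": 0.0},
}
MATCH_THRESHOLD = 0.5  # assumed maximum distance for a resource to match

def group_resources(resources):
    """Compare each reported resource's shooting position against the
    reference coordinates and group it under the nearest viewpoint,
    discarding resources that match no reference."""
    groups = {vp: [] for vp in REFERENCE}
    for res in resources:
        best_vp, best_d = None, float("inf")
        for vp, ref in REFERENCE.items():
            d = math.hypot(res["x"] - ref["x"], res["y"] - ref["y"])
            if d < best_d:
                best_vp, best_d = vp, d
        if best_d <= MATCH_THRESHOLD:
            groups[best_vp].append(res["id"])
    return groups

def synthesize(groups):
    """Stand-in for per-viewpoint data synthesis: emit one data set per
    viewpoint that received at least one matching resource."""
    return {vp: {"sources": ids} for vp, ids in groups.items() if ids}

resources = [
    {"id": "u1", "x": -0.9, "y": 0.1},
    {"id": "u2", "x": 0.1,  "y": 0.0},
    {"id": "u3", "x": 3.0,  "y": 0.0},  # too far from any reference: dropped
]
result = synthesize(group_resources(resources))
```

An actual embodiment would replace the stand-in `synthesize` with view synthesis of the image data themselves; the sketch only shows the compare-group-select control flow.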
- The user terminal of the embodiment of the present invention includes: a reporting unit, configured to report acquisition parameters and multimedia data resources based on different viewpoints; and a first receiving unit, configured to receive the image processing result of multi-view synthesis obtained according to the acquisition parameters and the multimedia data resources.
- the user terminal is any user terminal whose location is not fixed.
- the acquiring parameter includes: at least one of a shooting parameter and a shooting position.
- Preferably, the user terminal further includes: a display unit, configured to, upon receiving the image processing result of multi-view synthesis obtained according to the acquisition parameters and the multimedia data resources, select one of the multi-view synthesized image processing results for display according to the display parameter configuration or hardware operation configuration of the user terminal.
- the image processing terminal of the embodiment of the present invention may also be referred to as a focus device, and the focus device may be a user terminal with high performance or a server.
- The image processing terminal includes: a second receiving unit, configured to receive acquisition parameters and multimedia data resources based on different viewpoints; an image processing unit, configured to obtain the image processing result of multi-view synthesis according to the acquisition parameters and the multimedia data resources; and a sending unit, configured to send the image processing result.
- The image processing unit further includes: an establishing subunit, configured to establish reference coordinates and reference resources; a comparison subunit, configured to compare the established reference coordinates and reference resources with the acquisition parameters and the multimedia data resources, group the multimedia data resources, and select therefrom grouped data matching the reference coordinates and reference resources; and a synthesis subunit, configured to perform data synthesis of the respective viewpoints on the grouped data, to obtain a plurality of different viewpoint data sets as the image processing result of multi-view synthesis.
- An image processing system includes the above user terminal and the above image processing terminal.
- the source of the data resource in the embodiment of the present invention is not limited to a fixed user, and the data resources are actively provided by the free users.
- The present invention is not limited to the 3D application scenario and covers various image processing scenarios. For convenience of description, a 3D application scenario is taken as an example below.
- In the prior art, a resource producer such as a television station is required to adjust the camera parameters and camera positions in advance. That is, to photograph the currently required 3D image resource, a certain number of cameras are placed around the subject according to certain requirements; the camera parameters are recorded before shooting, and the cameras are placed at different angles and heights according to the viewpoints, whereby the data resources are obtained.
- the data resources are multimedia data resources.
- The resource producer processes the data from different locations and their corresponding camera parameters into 3D views of different viewpoints, and finally presents them to the user through a program.
- As for how to obtain the data resources, the prior art incurs a great deal of time and cost because of fixed users, fixed locations, and advance preparation.
- In the embodiment of the present invention, a free user can also photograph the object to be captured. Since the user is free, there is no need to set camera positions or preset camera parameters beforehand, which saves a great deal of time and cost. Moreover, since the data resources are provided by free users, the image processing constitutes a free-viewpoint synthesis service solution: a large number of terminal devices with relevant sensors carry relevant acquisition parameters (including camera parameter information, position information, etc.) in the captured images and hand the image resources over to a focus device for processing.
- The focus device judges and groups the image resource information based on a coordinate system, synthesizes the information according to the detection result to form data of different viewpoints, and finally integrates them into 3D data of a plurality of viewpoints, which can be used by different users.
- The biggest feature of this scheme is that the source of the data is free: for the same scene, many users can use their own terminal devices to capture images at different angles through the service, and these image data can be synthesized by viewpoint to obtain stereoscopically real data at different angles; hence it is called a free-viewpoint synthesis service solution.
- The embodiment of the present invention is directed to a 3D application scenario and provides a multimedia 3D multi-view synthesis service solution, including: 1) each free user sends the image or video resources on his or her device to the focus device, which can be a high-performance terminal device or a server device; 2) the focus device groups the obtained image or video resources according to a set coordinate evaluation system to determine the optimal images (where "optimal" covers not only the quality of the captured image but also the shooting angle, etc.), selects the optimal images of different viewpoints for view synthesis, forms image or video data of different viewpoints, and provides the synthesized data to each user.
- Solution 1, an implementation scheme in the 3D application scenario: the scheme obtains view-synthesized 3D image or 3D video data by using data resources provided by free users. The solution involves user 1, user 2, ... user N; the devices used by user 1, user 2, ... user N serve as the terminal devices for collecting data resources, and the focus device performs multi-view synthesis image processing on the data resources collected by each terminal device. As shown in FIG. 4, the scheme includes the following steps: Step 301: Each user photographs the corresponding object through his or her own device to obtain a corresponding image or video resource, which needs to carry camera parameters and position parameters; the focus device then exchanges information with the users, such as a network link, and provides corresponding storage space, and each user (user 1, user 2, ... user N) reports the data resources to the focus device for processing.
- The data resources include multimedia data resources such as captured image or video resources; when the data resources are reported, the parameter information and location information of the user's camera may also be carried at the same time.
- Step 302: The focus device performs view synthesis image processing on the reported data resources.
- Step 302 includes: 1) the focus device sets the selected reference coordinates and reference resources, and sets the parameter information; 2) the focus device compares the parameter information carried in the obtained image or video resources with the parameter information set in the focus device, groups the resources, and performs optimization and selection on them; 3) the images are sorted and grouped according to the conditions for view synthesis image processing, that is,
- the focus device performs data synthesis of the respective viewpoints on the optimally selected grouped data to form different viewpoint data sets.
- Step 303: The focus device stores the view-synthesized 3D data, generates corresponding information, and provides it to the user.
- In this solution, the user provides the image or video resources of the photographed object to the focus device through the network; the focus device distinguishes the obtained data resources according to the set conditions, sorts them and selects the appropriate data resources for view synthesis, forms synthesized 3D data of different viewpoints, and finally provides the data to the user.
- Solution 2, another implementation in the 3D application scenario: this solution uses the data resources provided by free users to obtain view-synthesized 3D image or 3D video data as template resources for forming augmented reality services.
- Step 401: Each user photographs the corresponding object through his or her own device to obtain a corresponding image or video resource, which needs to carry camera parameters and position parameters; the focus device then provides information for interacting with the users, such as a network link, and provides corresponding storage space, and each user (user 1, user 2, ... user N) reports the data resources to the focus device for processing.
- the data resources include: captured image or video resources, parameter information of the user camera, location information, and the like.
- Step 402: The focus device performs view synthesis image processing on the reported data resources.
- Step 402 includes: 1) the focus device sets the selected reference coordinates and reference resources, and sets the parameter information; 2) the focus device compares the parameter information carried in the obtained image or video resources with the parameter information set in the focus device, groups the resources, and performs optimization and selection on them; 3) the images are sorted and grouped according to the conditions for view synthesis image processing, that is,
- the focus device performs data synthesis of the respective viewpoints on the optimally selected grouped data to form different viewpoint data sets.
- Step 403: The view synthesis data is reprocessed according to the service requirements to form a template data resource for the augmented reality service.
- Step 404: The focus device stores the view synthesis data and generates corresponding information to provide to the users. Using the template data resource, users can enjoy the convenience brought by the augmented reality service.
- Regarding the template data resource: in solution 1 above, only basic information is provided; all the information is recorded, but it cannot be selectively processed. For example, if the information of a Champions League match is obtained, including the stadium, players, and so on, it cannot be selected and modified afterwards; viewpoint synthesis can only be carried out on the recorded data as it changes, yielding different results.
- This solution, by contrast, provides a template data resource with which the recorded information can be selected, processed, and modified afterwards. For example, after shooting the information of a Champions League match, including the stadium and players, this information can be used as a template resource; if the players playing at the stadium change, the data can be changed for post-processing.
- In the data format of an image or video resource, the corresponding camera parameters and shooting position parameters of the resource are added.
- The camera parameters are provided by the camera on the shooting terminal, and the position parameters can be obtained by a position sensor device on the shooting terminal; for example, devices such as an electronic compass or GPS can provide the corresponding position parameters.
- Camera parameters and position parameters can be placed in the header data area of the resource, as shown in Figure 5-6.
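By way of illustration only, carrying camera and position parameters in the header data area of a resource (as in FIGS. 5-6) can be sketched as follows; the field layout, sizes, and byte order are hypothetical assumptions, not the actual format of the embodiment:

```python
import struct

# Assumed header layout (not the embodiment's actual format):
#   focal length (float32), exposure (float32)  -- camera parameters
#   latitude (float64), longitude (float64)     -- position parameters
HEADER_FMT = "<ffdd"  # little-endian, 24 bytes total

def pack_resource(focal, exposure, lat, lon, payload: bytes) -> bytes:
    """Prepend the camera and position parameters to the image/video payload."""
    return struct.pack(HEADER_FMT, focal, exposure, lat, lon) + payload

def unpack_resource(blob: bytes):
    """Split a resource back into its parameter header and payload."""
    size = struct.calcsize(HEADER_FMT)
    focal, exposure, lat, lon = struct.unpack(HEADER_FMT, blob[:size])
    return {"focal": focal, "exposure": exposure, "lat": lat, "lon": lon}, blob[size:]

blob = pack_resource(4.2, 0.01, 39.9, 116.4, b"<jpeg bytes>")
header, payload = unpack_resource(blob)
```

The focus device could then read the header of each reported resource to obtain the parameter information without decoding the image payload itself.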
- The integrated modules described in the embodiments of the present invention may also be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present invention may, in essence or in the part contributing to the prior art, be embodied in the form of a software product.
- The computer software product is stored in a storage medium and includes a plurality of instructions for enabling
- a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the various embodiments of the present invention.
- The foregoing storage medium includes various media that can store program codes, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
- An embodiment of the present invention further provides a computer storage medium in which a computer program is stored, the computer program being configured to perform the image processing method of the embodiments of the present invention.
- In summary, the image processing method of the embodiment of the present invention includes: the user terminal reports acquisition parameters and multimedia data resources based on different viewpoints, the user terminal being any user terminal whose location is not fixed; and the user terminal receives the image processing result of multi-view synthesis obtained according to the acquisition parameters and the multimedia data resources. Since the user terminal is any user terminal whose location is not fixed, the user who uses it can be called a free user, and the data resources provided by free users do not require adjusting the camera parameters, camera positions, and the like in advance, so that 3D resources can be obtained quickly and stably without expensive production costs.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Social Psychology (AREA)
- Computer Networks & Wireless Communication (AREA)
- Databases & Information Systems (AREA)
- Studio Devices (AREA)
- Processing Or Creating Images (AREA)
- Image Processing (AREA)
Abstract
The present invention discloses an image processing method, a user terminal, an image processing terminal, and a system. The method includes: a user terminal reports acquisition parameters and multimedia data resources based on different viewpoints, the user terminal being any user terminal whose location is not fixed; and the user terminal receives an image processing result of multi-view synthesis obtained according to the acquisition parameters and the multimedia data resources.
Description
一种图像处理方法、 用户终端、 图像处理终端及系统 技术领域
本发明涉及多媒体通信技术领域, 尤其涉及一种图像处理方法、 用户 终端、 图像处理终端及图像处理系统。 背景技术
随着传感器技术、 多媒体技术、 宽带通讯技术、 互联网技术的快速发 展, 特别是显示技术的发展, 以 3D为代表的显示技术日益成为当前关注的 热点, 可以预见不久的将来在移动电话、 电脑、 电视上将会出现越来越多 的 3D资源。
本申请发明人在实现本申请实施例技术方案的过程中, 至少发现现有 技术中存在如下技术问题:
Taking image processing in a 3D image shooting scenario as an example, as shown in Fig. 1, multiple cameras serve as collection devices and capture images of a photographed object to generate a 3D resource. To obtain the 3D resource, a resource producer such as a television station must adjust the camera parameters, camera positions, and so on in advance.
That is, producing even a single 3D resource requires a large amount of advance preparation and considerable investment; production is expensive and unsuitable for large-scale deployment. No effective solution to this problem exists in the related art.
Summary
In view of this, embodiments of the present invention provide an image processing method, a user terminal, an image processing terminal, and an image processing system, which can obtain 3D resources quickly and stably without high production costs.
To solve the above problem, the technical solutions of the embodiments of the present invention are implemented as follows:
An image processing method, the method including:
a user terminal reporting collection parameters and multimedia data resources based on different viewpoints, the user terminal being any user terminal whose location is not fixed; and
the user terminal receiving a multi-viewpoint-synthesized image processing result obtained from the collection parameters and the multimedia data resources.
Preferably, the collection parameters include at least one of shooting parameters and a shooting position. Preferably, the method further includes: upon receiving the multi-viewpoint-synthesized image processing result obtained from the collection parameters and the multimedia data resources, selecting a result for display from the multi-viewpoint-synthesized image processing results according to the display parameter configuration or hardware running configuration of the user terminal.
An image processing method, the method including:
an image processing terminal receiving collection parameters and multimedia data resources based on different viewpoints; and the image processing terminal obtaining a multi-viewpoint-synthesized image processing result from the collection parameters and the multimedia data resources.
Preferably, the image processing terminal obtaining the multi-viewpoint-synthesized image processing result from the collection parameters and the multimedia data resources includes:
the image processing terminal establishing reference coordinates and a reference resource;
the image processing terminal comparing the established reference coordinates and reference resource with the collection parameters and the multimedia data resources, grouping the multimedia data resources, and selecting therefrom the grouped data that matches the reference coordinates and the reference resource; and
the image processing terminal performing per-viewpoint data synthesis on the grouped data to obtain multiple different viewpoint data sets as the multi-viewpoint-synthesized image processing result.
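The compare-group-synthesize flow on the image processing terminal can be sketched as follows. This is a minimal illustration only: the field names, the distance threshold, and the use of a highest-quality pick as the per-viewpoint "synthesis" are assumptions, not taken from the patent.

```python
import math

# A reported resource: viewpoint position, shooting angle, and a quality score.
# These field names are illustrative assumptions, not part of the patent text.
def make_resource(x, y, angle_deg, quality, payload):
    return {"x": x, "y": y, "angle": angle_deg, "quality": quality, "payload": payload}

def group_by_viewpoint(resources, reference_viewpoints, max_dist=5.0):
    """Compare each resource against the reference coordinates and keep only
    those close enough to some reference viewpoint, grouped per viewpoint."""
    groups = {i: [] for i in range(len(reference_viewpoints))}
    for r in resources:
        dists = [math.hypot(r["x"] - vx, r["y"] - vy)
                 for vx, vy in reference_viewpoints]
        best = min(range(len(dists)), key=dists.__getitem__)
        if dists[best] <= max_dist:          # matches the reference -> keep
            groups[best].append(r)
    return groups

def synthesize(groups):
    """Per-viewpoint 'synthesis': here simply the highest-quality resource of
    each group stands in for the synthesized viewpoint data set."""
    return {vp: max(g, key=lambda r: r["quality"])["payload"]
            for vp, g in groups.items() if g}

resources = [
    make_resource(0.4, 0.1, 0, 0.9, "front-A"),
    make_resource(0.2, 0.3, 0, 0.7, "front-B"),
    make_resource(9.8, 0.2, 90, 0.8, "side-A"),
    make_resource(50.0, 50.0, 0, 0.99, "far-away"),  # matches no reference viewpoint
]
result = synthesize(group_by_viewpoint(resources, [(0.0, 0.0), (10.0, 0.0)]))
print(result)  # {0: 'front-A', 1: 'side-A'}
```

Resources that match no reference viewpoint (here "far-away") are dropped during grouping, which mirrors the selection of matching grouped data described above.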
A user terminal, the user terminal including:
a reporting unit configured to report collection parameters and multimedia data resources based on different viewpoints; and
a first receiving unit configured to receive a multi-viewpoint-synthesized image processing result obtained from the collection parameters and the multimedia data resources;
the user terminal being any user terminal whose location is not fixed.
Preferably, the collection parameters include at least one of shooting parameters and a shooting position. Preferably, the user terminal further includes:
a display unit configured to, upon receiving the multi-viewpoint-synthesized image processing result obtained from the collection parameters and the multimedia data resources, select one result for display from the multi-viewpoint-synthesized image processing results according to the display parameter configuration or hardware running configuration of the user terminal.
When performing processing, the reporting unit and the first receiving unit may be implemented by a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or a Field-Programmable Gate Array (FPGA).
An image processing terminal, the image processing terminal including:
a second receiving unit configured to receive collection parameters and multimedia data resources based on different viewpoints;
an image processing unit configured to obtain a multi-viewpoint-synthesized image processing result from the collection parameters and the multimedia data resources; and
a sending unit configured to send the image processing result.
Preferably, the image processing unit further includes:
an establishing subunit configured to establish reference coordinates and a reference resource;
a comparing subunit configured to compare the established reference coordinates and reference resource with the collection parameters and the multimedia data resources, group the multimedia data resources, and select therefrom the grouped data that matches the reference coordinates and the reference resource; and
a synthesizing subunit configured to perform per-viewpoint data synthesis on the grouped data to obtain multiple different viewpoint data sets as the multi-viewpoint-synthesized image processing result.
When performing processing, the second receiving unit, the image processing unit, the sending unit, the establishing subunit, the comparing subunit, and the synthesizing subunit may be implemented by a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or a Field-Programmable Gate Array (FPGA).
An image processing system, including a user terminal and an image processing terminal;
the user terminal being configured to report collection parameters and multimedia data resources based on different viewpoints, and to receive a multi-viewpoint-synthesized image processing result obtained from the collection parameters and the multimedia data resources, the user terminal being any user terminal whose location is not fixed;
the image processing terminal being configured to receive collection parameters and multimedia data resources based on different viewpoints, obtain a multi-viewpoint-synthesized image processing result from the collection parameters and the multimedia data resources, and send the image processing result.
Preferably, the user terminal may be any of the user terminals described above, and the image processing terminal may be any of the image processing terminals described above.
The system includes the user terminal according to any of the above and the image processing terminal according to any of the above. The image processing method of the embodiments of the present invention includes: a user terminal reports collection parameters and multimedia data resources based on different viewpoints, the user terminal being any user terminal whose location is not fixed; and the user terminal receives a multi-viewpoint-synthesized image processing result obtained from the collection parameters and the multimedia data resources. Since the user terminal is any user terminal whose location is not fixed, users of such terminals may be called free users, and the data resources they provide do not require the camera parameters, camera positions, and so on to be adjusted in advance, so that 3D resources can ultimately be obtained quickly and stably without high production costs.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of image collection in a prior-art 3D image shooting scenario;
Fig. 2 is a flowchart of an image processing method according to an embodiment of the present invention;
Fig. 3 is a flowchart of an image processing method according to an embodiment of the present invention;
Fig. 4 is a flowchart of a free-viewpoint synthesis scenario applying an embodiment of the present invention;
Fig. 5 is a schematic diagram of a data format applying an embodiment of the present invention;
Fig. 6 is a schematic diagram of a data format applying an embodiment of the present invention.
Detailed Description
The implementation of the technical solutions is described in further detail below with reference to the accompanying drawings.
As shown in Fig. 2, the image processing method of an embodiment of the present invention includes the following steps:
Step 101: a user terminal reports collection parameters and multimedia data resources based on different viewpoints; the user terminal is any user terminal whose location is not fixed.
Here, the collection parameters and multimedia data resources based on different viewpoints all relate to the same shooting scene.
Here, the multimedia data resources include various multimedia data resources such as text, images, and video.
Here, since the user terminal is any user terminal whose location is not fixed, a user of such a terminal may be called a free user.
Step 102: the user terminal receives a multi-viewpoint-synthesized image processing result obtained from the collection parameters and the multimedia data resources.
It should be noted that the user terminal receiving the synthesized image processing result in step 102 and the user terminal uploading the source data in step 101 may be different terminals or the same terminal.
In a preferred implementation of this embodiment, the collection parameters include at least one of the shooting parameters and the shooting position of the user terminal when collecting the photographed object.
The shooting parameters include shooting quality, shooting angle, and the like.
In a preferred implementation of this embodiment, the method further includes: upon receiving the multi-viewpoint-synthesized image processing result obtained from the collection parameters and the multimedia data resources, selecting a result for display from the multi-viewpoint-synthesized image processing results according to the display parameter configuration or hardware running configuration of the user terminal.
It should be noted that, for this selective display, one user terminal may select one or more of the results for display.
Specifically, the display parameter configuration may be the display resolution, and the hardware running configuration may be parameter configurations such as the terminal running speed and resource occupancy. A result for display can thus be selected according to the display resolution, or according to parameter configurations such as terminal running speed and resource occupancy, so that the selection yields the best display result.
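The selection by display and hardware configuration can be sketched as follows; the result fields, the cost budget, and the closest-resolution rule are illustrative assumptions, not specified by the patent.

```python
# Each synthesized result is offered at some resolution and decoding cost.
# Field names and the selection rule are illustrative assumptions.
results = [
    {"name": "views-720p",  "width": 1280, "cost": 1.0},
    {"name": "views-1080p", "width": 1920, "cost": 2.5},
    {"name": "views-4k",    "width": 3840, "cost": 8.0},
]

def pick_result(results, display_width, max_cost):
    """Keep results the terminal can afford to decode (cost budget), then
    choose the one whose width best matches the display resolution."""
    feasible = [r for r in results if r["cost"] <= max_cost]
    return min(feasible, key=lambda r: abs(r["width"] - display_width))

# A terminal with a 1080p screen but limited processing headroom:
print(pick_result(results, display_width=1920, max_cost=3.0)["name"])  # views-1080p
```

A faster terminal with a 4K display (`max_cost=10.0`, `display_width=3840`) would instead select the 4K result, matching the idea that the displayed result follows the terminal's own capabilities.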
As shown in Fig. 3, the image processing method of an embodiment of the present invention includes the following steps:
Step 201: an image processing terminal receives collection parameters and multimedia data resources based on different viewpoints.
Step 202: the image processing terminal obtains a multi-viewpoint-synthesized image processing result from the collection parameters and the multimedia data resources, and sends the image processing result.
It should be noted that, for step 202, after the multi-viewpoint-synthesized image processing result is obtained, the image processing result may be sent to the user terminal, or may merely be made available for subsequent use, i.e., sending it to the user terminal may be deferred. In other words, the final processing result of the image processing terminal may be used in several ways: 1) the image processing terminal, for example a server, provides a link, and the user terminal sends a request to obtain the image processing result; 2) the image processing terminal, for example a server, sends the result directly to the user terminal, as in step 202; or 3) the result may be stored somewhere where it can be queried or otherwise operated on.
In a preferred implementation of this embodiment, the image processing terminal in step 202 obtaining the multi-viewpoint-synthesized image processing result from the collection parameters and the multimedia data resources includes:
the image processing terminal establishing reference coordinates and a reference resource;
the image processing terminal comparing the established reference coordinates and reference resource with the collection parameters and the multimedia data resources, grouping the multimedia data resources, and selecting therefrom the grouped data that matches the reference coordinates and the reference resource; and
the image processing terminal performing per-viewpoint data synthesis on the grouped data to obtain multiple different viewpoint data sets as the multi-viewpoint-synthesized image processing result.
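The three ways of using the final result enumerated above (link on request, direct send, stored for later query) can be sketched as follows; the function names, modes, and link format are illustrative assumptions.

```python
store = {}  # stands in for the image processing terminal's result storage

def publish(result_id, data, mode, push=None):
    """Mode 1: store the result and return a link the user terminal can
    request later; mode 2: push the result directly to the user terminal
    via a callback; mode 3: only store it for later query. The modes and
    names are illustrative assumptions, not an API from the patent."""
    store[result_id] = data
    if mode == 1:
        return f"/results/{result_id}"   # link for a later request
    if mode == 2:
        push(data)                       # direct send to the terminal
    return None                          # mode 3: stored only

received = []
link = publish("r1", b"3d-views", mode=1)
publish("r2", b"3d-views", mode=2, push=received.append)
print(link, received == [b"3d-views"])  # /results/r1 True
```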
A user terminal of an embodiment of the present invention includes: a reporting unit for reporting collection parameters and multimedia data resources based on different viewpoints; and a first receiving unit for receiving a multi-viewpoint-synthesized image processing result obtained from the collection parameters and the multimedia data resources.
It should be noted that the user terminal is any user terminal whose location is not fixed. In a preferred implementation of this embodiment, the collection parameters include at least one of shooting parameters and a shooting position.
In a preferred implementation of this embodiment, the user terminal further includes a display unit for, upon receiving the multi-viewpoint-synthesized image processing result obtained from the collection parameters and the multimedia data resources, selecting one result for display from the multi-viewpoint-synthesized image processing results according to the display parameter configuration or hardware running configuration of the user terminal.
The image processing terminal of an embodiment of the present invention may also be called a focal device; the focal device may be a user terminal with relatively high performance, or a server.
In a preferred implementation of this embodiment, the image processing terminal includes: a second receiving unit for receiving collection parameters and multimedia data resources based on different viewpoints; an image processing unit for obtaining a multi-viewpoint-synthesized image processing result from the collection parameters and the multimedia data resources; and a sending unit for sending the image processing result.
In a preferred implementation of this embodiment, the image processing unit further includes: an establishing subunit for establishing reference coordinates and a reference resource; a comparing subunit for comparing the established reference coordinates and reference resource with the collection parameters and the multimedia data resources, grouping the multimedia data resources, and selecting therefrom the grouped data that matches the reference coordinates and the reference resource; and a synthesizing subunit for performing per-viewpoint data synthesis on the grouped data to obtain multiple different viewpoint data sets as the multi-viewpoint-synthesized image processing result.
The image processing system of an embodiment of the present invention includes the above user terminal and the above image processing terminal.
The sources of data resources in embodiments of the present invention are not limited to fixed users; individual free users may actively provide data resources. Nor are the embodiments limited to 3D application scenarios; they cover all image processing scenarios. For convenience of description, the 3D application scenario is used as an example below.
In the 3D application scenario, the prior art requires a resource producer such as a television station to adjust camera parameters, camera positions, and so on in advance in order to obtain a 3D resource. That is, producing 3D image resources currently requires placing a certain number of cameras around the photographed object according to certain requirements; camera parameters and the like are recorded before shooting, and the cameras are placed at different angles, heights, and other positions according to the viewpoints needed. After the data resources (multimedia data resources including image and video resources) are obtained, they are processed: the resource producer synthesizes 3D views of different viewpoints from the data from different positions and the corresponding camera parameters, and finally presents them to users through a program. The core of the whole image processing process is how to obtain the data resources, and the prior art, relying on fixed users and fixed positions, spends a great deal of time and cost on advance preparation.
In the 3D application scenario, to avoid the large amount of time and cost spent on fixed positions and advance preparation when fixed users collect the photographed object, embodiments of the present invention propose that free users may also collect the photographed object. Since they are free users, there is no need to set camera positions or preset camera parameters in advance, which saves considerable time and cost. Moreover, since the data resources are provided by free users, the image processing is a free-viewpoint synthesis service solution: a large number of current terminal devices equipped with relevant sensors capture images carrying relevant collection parameters (including camera parameter information, position information, etc.) and hand the image resources to the focal device for processing. Based on a coordinate system, the focal device evaluates and groups the obtained image resource information, performs viewpoint synthesis on this information according to the detection results to form data of different viewpoints, and finally integrates it into 3D data of multiple viewpoints for use by different users. The greatest feature of this solution is that the data sources are free: for the same scene, images acquired by many users with their own terminal devices from different angles can be viewpoint-synthesized by this service, and the resulting data from different angles is more realistic; hence it is called a free-viewpoint synthesis service solution.
In summary, for the 3D application scenario, the embodiments of the present invention constitute a multimedia 3D multi-viewpoint synthesis service solution, including: 1) each free user sends the image or video resources on their device to the focal device, which may be a terminal device with relatively high performance or a server device; 2) the focal device groups the obtained image or video resources according to a configured coordinate evaluation system, determines the optimal images (where "optimal" covers not only the captured image quality but also the shooting angle, etc.), selects the optimal images of different viewpoints for viewpoint synthesis to form image or video data of different viewpoints, and then provides the synthesized data to the users.
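An "optimal image" criterion that weighs both capture quality and shooting angle, as described above, can be sketched as follows; the fields, weights, and linear scoring rule are assumptions for illustration.

```python
# Candidate shots for one viewpoint; field names and weights are assumptions.
candidates = [
    {"id": "u1", "quality": 0.9, "angle_err": 30.0},  # sharp but badly angled
    {"id": "u2", "quality": 0.8, "angle_err": 5.0},   # slightly softer, well angled
    {"id": "u3", "quality": 0.4, "angle_err": 2.0},   # well angled but low quality
]

def score(shot, w_quality=1.0, w_angle=0.02):
    """Higher is better: reward image quality and penalize deviation (in
    degrees) from the desired shooting angle for this viewpoint."""
    return w_quality * shot["quality"] - w_angle * shot["angle_err"]

best = max(candidates, key=score)
print(best["id"])  # u2
```

The sharpest shot (u1) loses to u2 because its angle is far off, reflecting that "optimal" is not image quality alone.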
Compared with the prior art, the advantages of applying embodiments of the present invention in the 3D application scenario are mainly reflected in the following aspects:
1. The positions of the devices do not need to be set in advance;
2. Users are allowed to define relevant parameters and synthesis parameter settings;
3. The capabilities of the user devices are fully utilized.
Scheme 1, an implementation in the 3D application scenario: viewpoint-synthesized 3D images or 3D video data are obtained using data resources provided by free users. The scheme involves user 1, user 2, ... user N, whose devices serve as the terminal devices for collecting data resources, and a focal device for performing multi-viewpoint-synthesis image processing on the data resources collected and aggregated by the terminal devices. As shown in Fig. 4, it includes the following steps:
Step 301: each user shoots the corresponding object with their own device to obtain the corresponding image or video resource, which needs to carry camera parameters, position parameters, and the like. The focal device then provides information for interacting with the users, e.g., exchanged via a network link, and provides corresponding storage space; each user (user 1, user 2, ... user N) reports the data resources to the focal device for processing.
Here, the data resources include multimedia data resources such as the captured image or video resources; when reporting the data resources, the parameter information of the user's camera, position information, and so on may also be carried.
Step 302: the focal device performs viewpoint-synthesis image processing on the reported data resources.
Here, step 302 includes: 1) the focal device sets the selected reference coordinates and reference resource, and sets the parameter information; 2) using the set parameter information and the obtained image or video resources, the focal device compares the parameter information carried in the resources against the parameter information set in the focal device, groups the resources, and makes an optimized selection among them; 3) the images are sorted and grouped, and viewpoint-synthesis image processing is performed according to the conditions, i.e., through optimized selection of the data, the focal device performs per-viewpoint data synthesis on the grouped data to form different viewpoint data sets.
Step 303: the focal device stores the 3D data obtained by viewpoint synthesis, generates the corresponding information, and provides it to users.
In this free-viewpoint synthesis scheme, users provide the image or video resources of the photographed object to the focal device over the network; the focal device distinguishes the obtained data resources according to the configured conditions, sorts and selects suitable data resources for viewpoint synthesis to form synthesized 3D data of different viewpoints, and finally provides it for users to use.
Scheme 2, an implementation in the 3D application scenario: 3D data resources provided by free users are used to obtain viewpoint-synthesized 3D images or 3D video data, which then form template resources for augmented reality.
Step 401: each user shoots the corresponding object with their own device to obtain the corresponding image or video resource, which needs to carry camera parameters, position parameters, and the like. The focal device then provides information for interacting with the users, e.g., exchanged via a network link, and provides corresponding storage space; each user (user 1, user 2, ... user N) reports the data resources to the focal device for processing.
Here, the data resources include the captured image or video resources, the parameter information of the user's camera, position information, and so on.
Step 402: the focal device performs viewpoint-synthesis image processing on the reported data resources.
Here, step 402 includes: 1) the focal device sets the selected reference coordinates and reference resource, and sets the parameter information; 2) using the set parameter information and the obtained image or video resources, the focal device compares the parameter information carried in the resources against the parameter information set in the focal device, groups the resources, and makes an optimized selection among them; 3) the images are sorted and grouped, and viewpoint-synthesis image processing is performed according to the conditions, i.e., through optimized selection of the data, the focal device performs per-viewpoint data synthesis on the grouped data to form different viewpoint data sets.
Step 403: according to service needs, the viewpoint-synthesized data is reprocessed to form template data resources for the augmented reality service.
Step 404: the focal device stores the viewpoint-synthesized data, generates the corresponding information, and provides it to users. Using the template data resources, users can enjoy the convenience brought by the augmented reality service.
Regarding the template data resources: scheme 1 above, for example, only provides basic information, i.e., all the information is recorded but cannot be selectively processed. If information about a Champions League stadium is captured, including the pitch, the players, and so on, it cannot be selected or modified afterwards; viewpoint synthesis can only be performed continuously as things change, yielding different results. This scheme, by contrast, provides a template data resource whose recorded information can be selectively processed and modified afterwards: if the captured information about a Champions League stadium, including the pitch, the players, and so on, serves as a template resource, and different players later play a match in that stadium, the data can be freely replaced for reprocessing.
Figs. 5-6 are schematic diagrams of data formats applying embodiments of the present invention: the data format of an image or video resource is extended with the corresponding camera parameters and the shooting position parameters corresponding to that image or video resource. The camera parameters are provided by the camera on the shooting terminal; the position parameters can be obtained by position sensor devices on the shooting terminal, e.g., an electronic compass, GPS, and similar devices can provide the corresponding position parameters. Besides the arrangement shown in Figs. 5-6, the camera parameters and position parameters may also be placed in the header data area of the resource.
If the integrated modules described in the embodiments of the present invention are implemented in the form of software functional modules and sold or used as independent products, they may also be stored in a computer-readable storage medium. Based on this understanding, the essence of the technical solutions of the embodiments of the present invention, or the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk. Thus, the embodiments of the present invention are not limited to any specific combination of hardware and software.
Correspondingly, an embodiment of the present invention further provides a computer storage medium storing a computer program, the computer program being for performing the image processing method of the embodiments of the present invention.
The above are merely preferred embodiments of the present invention and are not intended to limit the protection scope of the present invention.
Industrial Applicability
The image processing method of the embodiments of the present invention includes: a user terminal reports collection parameters and multimedia data resources based on different viewpoints, the user terminal being any user terminal whose location is not fixed; and the user terminal receives a multi-viewpoint-synthesized image processing result obtained from the collection parameters and the multimedia data resources. With the embodiments of the present invention, since the user terminal is any user terminal whose location is not fixed, users of such terminals may be called free users, and the data resources provided by free users do not require the camera parameters, camera positions, and so on to be adjusted in advance, so that 3D resources can ultimately be obtained quickly and stably without high production costs.
Claims
1. An image processing method, the method comprising:
reporting, by a user terminal, collection parameters and multimedia data resources based on different viewpoints, the user terminal being any user terminal whose location is not fixed; and
receiving, by the user terminal, a multi-viewpoint-synthesized image processing result obtained from the collection parameters and the multimedia data resources.
2. The method according to claim 1, wherein the collection parameters comprise at least one of shooting parameters and a shooting position.
3. The method according to claim 1 or 2, wherein the method further comprises: upon receiving the multi-viewpoint-synthesized image processing result obtained from the collection parameters and the multimedia data resources, selecting a result for display from the multi-viewpoint-synthesized image processing results according to a display parameter configuration or a hardware running configuration of the user terminal.
4. An image processing method, the method comprising:
receiving, by an image processing terminal, collection parameters and multimedia data resources based on different viewpoints; and obtaining, by the image processing terminal, a multi-viewpoint-synthesized image processing result from the collection parameters and the multimedia data resources.
5. The method according to claim 4, wherein the image processing terminal obtaining the multi-viewpoint-synthesized image processing result from the collection parameters and the multimedia data resources comprises: establishing, by the image processing terminal, reference coordinates and a reference resource;
comparing, by the image processing terminal, the established reference coordinates and reference resource with the collection parameters and the multimedia data resources, grouping the multimedia data resources, and selecting therefrom the grouped data matching the reference coordinates and the reference resource; and
performing, by the image processing terminal, per-viewpoint data synthesis on the grouped data to obtain multiple different viewpoint data sets as the multi-viewpoint-synthesized image processing result.
6. A user terminal, the user terminal comprising:
a reporting unit for reporting collection parameters and multimedia data resources based on different viewpoints; and a first receiving unit for receiving a multi-viewpoint-synthesized image processing result obtained from the collection parameters and the multimedia data resources;
the user terminal being any user terminal whose location is not fixed.
7. The user terminal according to claim 6, wherein the collection parameters comprise at least one of shooting parameters and a shooting position.
8. The user terminal according to claim 6 or 7, wherein the user terminal further comprises: a display unit for, upon receiving the multi-viewpoint-synthesized image processing result obtained from the collection parameters and the multimedia data resources, selecting one result for display from the multi-viewpoint-synthesized image processing results according to a display parameter configuration or a hardware running configuration of the user terminal.
9. An image processing terminal, the image processing terminal comprising:
a second receiving unit configured to receive collection parameters and multimedia data resources based on different viewpoints;
an image processing unit configured to obtain a multi-viewpoint-synthesized image processing result from the collection parameters and the multimedia data resources; and
a sending unit for sending the image processing result.
10. The image processing terminal according to claim 9, wherein the image processing unit further comprises:
an establishing subunit configured to establish reference coordinates and a reference resource;
a comparing subunit configured to compare the established reference coordinates and reference resource with the collection parameters and the multimedia data resources, group the multimedia data resources, and select therefrom the grouped data matching the reference coordinates and the reference resource; and
a synthesizing subunit configured to perform per-viewpoint data synthesis on the grouped data to obtain multiple different viewpoint data sets as the multi-viewpoint-synthesized image processing result.
11. An image processing system, comprising a user terminal and an image processing terminal;
the user terminal being configured to report collection parameters and multimedia data resources based on different viewpoints, and to receive a multi-viewpoint-synthesized image processing result obtained from the collection parameters and the multimedia data resources, the user terminal being any user terminal whose location is not fixed;
the image processing terminal being configured to receive collection parameters and multimedia data resources based on different viewpoints, obtain a multi-viewpoint-synthesized image processing result from the collection parameters and the multimedia data resources, and send the image processing result.
12. The system according to claim 11, wherein
the user terminal is the user terminal according to any one of claims 7 to 8; and
the image processing terminal is the image processing terminal according to claim 10.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310645595.1A CN104994369B (zh) | 2013-12-04 | 2013-12-04 | Image processing method, user terminal, image processing terminal and system |
CN201310645595.1 | 2013-12-04 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014183533A1 true WO2014183533A1 (zh) | 2014-11-20 |
Family
ID=51897680
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2014/075687 WO2014183533A1 (zh) | 2013-12-04 | 2014-04-18 | Image processing method, user terminal, image processing terminal and system |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN104994369B (zh) |
WO (1) | WO2014183533A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105872540A (zh) * | 2016-04-26 | 2016-08-17 | 乐视控股(北京)有限公司 | Video processing method and apparatus |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107181938B (zh) * | 2016-03-11 | 2019-05-21 | 深圳超多维科技有限公司 | Image display method and device, and image analysis method, device and system |
CN108038836B (zh) * | 2017-11-29 | 2020-04-17 | 维沃移动通信有限公司 | Image processing method and apparatus, and mobile terminal |
CN108566514A (zh) * | 2018-04-20 | 2018-09-21 | Oppo广东移动通信有限公司 | Image synthesis method and apparatus, device, and computer-readable storage medium |
CN113784148A (zh) * | 2020-06-10 | 2021-12-10 | 阿里巴巴集团控股有限公司 | Data processing method and system, related device, and storage medium |
CN114697516B (zh) * | 2020-12-25 | 2023-11-10 | 花瓣云科技有限公司 | Three-dimensional model reconstruction method, device, and storage medium |
CN113438462B (zh) * | 2021-06-04 | 2022-09-02 | 北京小米移动软件有限公司 | Multi-device interconnected shooting method and apparatus, electronic device, and storage medium |
CN114638771B (zh) * | 2022-03-11 | 2022-11-29 | 北京拙河科技有限公司 | Hybrid-model-based video fusion method and system |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1461560A (zh) * | 2001-03-15 | 2003-12-10 | 康斯坦丁迪斯·阿波斯托洛斯 | System for live multi-viewpoint recording and playback of live or video-recorded signals |
CN101662693A (zh) * | 2008-08-27 | 2010-03-03 | 深圳华为通信技术有限公司 | Method, apparatus and system for sending and playing multi-viewpoint media content |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110242342A1 (en) * | 2010-04-05 | 2011-10-06 | Qualcomm Incorporated | Combining data from multiple image sensors |
US8872888B2 (en) * | 2010-10-01 | 2014-10-28 | Sony Corporation | Content transmission apparatus, content transmission method, content reproduction apparatus, content reproduction method, program and content delivery system |
JP5966256B2 (ja) * | 2011-05-23 | 2016-08-10 | ソニー株式会社 | Image processing apparatus and method, program, and recording medium |
-
2013
- 2013-12-04 CN CN201310645595.1A patent/CN104994369B/zh not_active Expired - Fee Related
-
2014
- 2014-04-18 WO PCT/CN2014/075687 patent/WO2014183533A1/zh active Application Filing
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1461560A (zh) * | 2001-03-15 | 2003-12-10 | 康斯坦丁迪斯·阿波斯托洛斯 | System for live multi-viewpoint recording and playback of live or video-recorded signals |
CN101662693A (zh) * | 2008-08-27 | 2010-03-03 | 深圳华为通信技术有限公司 | Method, apparatus and system for sending and playing multi-viewpoint media content |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105872540A (zh) * | 2016-04-26 | 2016-08-17 | 乐视控股(北京)有限公司 | Video processing method and apparatus |
Also Published As
Publication number | Publication date |
---|---|
CN104994369B (zh) | 2018-08-21 |
CN104994369A (zh) | 2015-10-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2014183533A1 (zh) | Image processing method, user terminal, image processing terminal and system | |
US8767081B2 (en) | Sharing video data associated with the same event | |
US8810632B2 (en) | Apparatus and method for generating a three-dimensional image using a collaborative photography group | |
JP6419201B2 (ja) | Method and apparatus for video playback | |
CN104012106B (zh) | Aligning videos representing different viewpoints | |
CN107888987B (zh) | Panoramic video playback method and apparatus | |
US20130300821A1 (en) | Selectively combining a plurality of video feeds for a group communication session | |
CN103916978B (zh) | Method for establishing a wireless connection, and electronic device | |
US20220030214A1 (en) | Generation and distribution of immersive media content from streams captured via distributed mobile devices | |
WO2018094866A1 (zh) | UAV-based panoramic live broadcast method and terminal | |
US9007531B2 (en) | Methods and apparatus for expanding a field of view in a video communication session | |
WO2014075413A1 (zh) | Method, apparatus and system for determining a terminal to be shared | |
CN105701762B (zh) | Picture processing method and electronic device | |
KR20190038134A (ko) | 360-degree video live streaming service method and server apparatus | |
JP2016537692A (ja) | Method and system for performing image identification | |
US20130300885A1 (en) | Method, apparatus and computer-readable medium for image registration and display | |
CN108566514A (zh) | Image synthesis method and apparatus, device, and computer-readable storage medium | |
WO2023020197A1 (zh) | Continuation processing method and system for multimedia content, and storage medium | |
CN110662119A (zh) | Video stitching method and apparatus | |
CN109963106B (zh) | Video image processing method and apparatus, storage medium, and terminal | |
WO2015192615A1 (zh) | Image file sharing method and apparatus, and computer storage medium | |
US20210218933A1 (en) | Redundant array of inexpensive cameras | |
WO2015089944A1 (zh) | Method and apparatus for processing video conference images, and conference terminal | |
CN107733874A (zh) | Information processing method and apparatus, computer device, and storage medium | |
EP3513546B1 (en) | Systems and methods for segmented data transmission |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14798493 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 14798493 Country of ref document: EP Kind code of ref document: A1 |