CN104994369A - Image processing method, user terminal, and image processing terminal and system - Google Patents

Image processing method, user terminal, and image processing terminal and system Download PDF

Info

Publication number
CN104994369A
CN104994369A (application CN201310645595.1A)
Authority
CN
China
Prior art keywords
user terminal
image processing
multimedia data
resources
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310645595.1A
Other languages
Chinese (zh)
Other versions
CN104994369B (en)
Inventor
尚国强
李明
吴平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN201310645595.1A priority Critical patent/CN104994369B/en
Priority to PCT/CN2014/075687 priority patent/WO2014183533A1/en
Publication of CN104994369A publication Critical patent/CN104994369A/en
Application granted granted Critical
Publication of CN104994369B publication Critical patent/CN104994369B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/816Monomedia components thereof involving special video data, e.g. 3D video
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4318Generation of visual interfaces for content selection or interaction; Content or additional data rendering by altering the content in the rendering process, e.g. blanking, blurring or masking an image region
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program

Abstract

The invention discloses an image processing method, a user terminal, an image processing terminal and an image processing system. The method comprises: a user terminal reporting acquisition parameters and multimedia data resources based on different viewpoints, the user terminal being any user terminal whose position is not fixed; and the user terminal receiving an image processing result of multi-viewpoint synthesis obtained according to the acquisition parameters and the multimedia data resources.

Description

Image processing method, user terminal, image processing terminal and system
Technical field
The present invention relates to the field of multimedia communication technology, and in particular to an image processing method, a user terminal, an image processing terminal and an image processing system.
Background
With the rapid development of sensor technology, multimedia technology, broadband communication technology and Internet technology, and especially of display technology, display technologies represented by 3D have increasingly become a focus of attention. It can be predicted that more and more 3D resources will appear on mobile phones, computers and televisions in the near future.
In the process of implementing the technical solutions of the embodiments of the present application, the inventors found at least the following technical problem in the prior art:
Taking image processing in a 3D image shooting scene as an example, as shown in Fig. 1, multiple cameras serve as acquisition devices and capture images of a photographed object in order to generate a 3D resource. To obtain the 3D resource, a resource producer such as a TV station has to adjust the camera parameters, the camera positions and so on in advance. In other words, producing even a single 3D resource requires a large amount of advance preparation and a considerable investment; the production cost is high and the approach is unsuitable for large-scale popularization. The related art offers no effective solution to this problem.
Summary of the invention
In view of this, embodiments of the present invention provide an image processing method, a user terminal, an image processing terminal and an image processing system, so that 3D resources can be obtained quickly and stably without a high production cost.
To solve the above problem, the technical solutions of the embodiments of the present invention are implemented as follows:
An image processing method, the method comprising:
a user terminal reporting acquisition parameters and multimedia data resources based on different viewpoints, the user terminal being any user terminal whose position is not fixed; and
the user terminal receiving an image processing result of multi-viewpoint synthesis obtained according to the acquisition parameters and the multimedia data resources.
Preferably, the acquisition parameters comprise at least one of: shooting parameters and a shooting position.
Preferably, the method further comprises: after receiving the image processing result of multi-viewpoint synthesis obtained according to the acquisition parameters and the multimedia data resources, selecting one result from the multi-viewpoint synthesis results for display according to the display parameter configuration or the hardware running configuration of the user terminal.
An image processing method, the method comprising:
an image processing terminal receiving acquisition parameters and multimedia data resources based on different viewpoints; and
the image processing terminal obtaining an image processing result of multi-viewpoint synthesis according to the acquisition parameters and the multimedia data resources.
Preferably, the image processing terminal obtaining an image processing result of multi-viewpoint synthesis according to the acquisition parameters and the multimedia data resources comprises:
the image processing terminal establishing a reference coordinate system and a reference resource;
the image processing terminal comparing the established reference coordinate system and reference resource with the acquisition parameters and the multimedia data resources, grouping the multimedia data resources and selecting therefrom the grouped data that match the reference coordinate system and the reference resource; and
the image processing terminal performing data synthesis for each viewpoint on the grouped data to obtain multiple different viewpoint data sets as the image processing result of multi-viewpoint synthesis.
A user terminal, the user terminal comprising:
a reporting unit, configured to report acquisition parameters and multimedia data resources based on different viewpoints; and
a first receiving unit, configured to receive an image processing result of multi-viewpoint synthesis obtained according to the acquisition parameters and the multimedia data resources;
the user terminal being any user terminal whose position is not fixed.
Preferably, the acquisition parameters comprise at least one of: shooting parameters and a shooting position.
Preferably, the user terminal further comprises:
a display unit, configured to receive the image processing result of multi-viewpoint synthesis obtained according to the acquisition parameters and the multimedia data resources, and to select one result from the multi-viewpoint synthesis results for display according to the display parameter configuration or the hardware running configuration of the user terminal.
An image processing terminal, the image processing terminal comprising:
a second receiving unit, configured to receive acquisition parameters and multimedia data resources based on different viewpoints;
an image processing unit, configured to obtain an image processing result of multi-viewpoint synthesis according to the acquisition parameters and the multimedia data resources; and
a sending unit, configured to send the image processing result.
Preferably, the image processing unit further comprises:
an establishing subunit, configured to establish a reference coordinate system and a reference resource;
a comparing subunit, configured to compare the established reference coordinate system and reference resource with the acquisition parameters and the multimedia data resources, to group the multimedia data resources and to select therefrom the grouped data that match the reference coordinate system and the reference resource; and
a synthesizing subunit, configured to perform data synthesis for each viewpoint on the grouped data to obtain multiple different viewpoint data sets as the image processing result of multi-viewpoint synthesis.
An image processing system, comprising the user terminal described in any one of the above and the image processing terminal described in any one of the above.
The image processing method of the embodiments of the present invention comprises: a user terminal reporting acquisition parameters and multimedia data resources based on different viewpoints, the user terminal being any user terminal whose position is not fixed; and the user terminal receiving an image processing result of multi-viewpoint synthesis obtained according to the acquisition parameters and the multimedia data resources. Because the user terminal is any user terminal whose position is not fixed, the user of such a terminal may be called a free user, and the data resources provided by free users require no advance adjustment of camera parameters, camera positions and so on. As a result, 3D resources can ultimately be obtained quickly and stably, without a high production cost.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of image acquisition in a prior-art 3D image shooting scene;
Fig. 2 is an implementation flowchart of an image processing method according to an embodiment of the present invention;
Fig. 3 is an implementation flowchart of an image processing method according to an embodiment of the present invention;
Fig. 4 is a flowchart of a free-viewpoint synthesis scene applying an embodiment of the present invention;
Fig. 5 is a schematic diagram of a data format applying an embodiment of the present invention;
Fig. 6 is a schematic diagram of a data format applying an embodiment of the present invention.
Detailed Description
The implementation of the technical solutions is described in further detail below with reference to the accompanying drawings.
The image processing method of an embodiment of the present invention, as shown in Fig. 2, comprises the following steps:
Step 101: a user terminal reports acquisition parameters and multimedia data resources based on different viewpoints; the user terminal is any user terminal whose position is not fixed.
Here, the acquisition parameters and multimedia data resources based on different viewpoints relate to the same shooting scene.
Here, the multimedia data resources include various types of multimedia data such as text, images and video.
Here, since the user terminal is any user terminal whose position is not fixed, the user of such a terminal may be called a free user.
Step 102: the user terminal receives an image processing result of multi-viewpoint synthesis obtained according to the acquisition parameters and the multimedia data resources.
It should be noted that the user terminal that receives the synthesized image processing result in step 102 may be a different terminal from the user terminal that uploads the source data in step 101, or it may be the same terminal.
In a preferred implementation of this embodiment of the present invention, the acquisition parameters comprise at least one of: the shooting parameters used when the user terminal captures the photographed object, and the shooting position.
The shooting parameters include the shooting image quality, the shooting angle and so on.
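As an illustration only (not part of the patent), the acquisition parameters carried alongside an uploaded resource could be modelled roughly as follows; all names (AcquisitionParams, UploadedResource and their fields) are hypothetical.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class AcquisitionParams:
    # Shooting parameters of the user terminal's camera (assumed fields)
    quality: Optional[str] = None           # e.g. "1080p"
    angle_deg: Optional[float] = None       # shooting angle
    # Shooting position, e.g. from GPS or an electronic compass
    position: Optional[Tuple[float, float, float]] = None  # (lat, lon, alt)

@dataclass
class UploadedResource:
    media: bytes                  # the image or video data itself
    params: AcquisitionParams     # parameters reported together with the media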
In a preferred implementation of this embodiment of the present invention, the method further comprises: after receiving the image processing result of multi-viewpoint synthesis obtained according to the acquisition parameters and the multimedia data resources, selecting one result from the multi-viewpoint synthesis results for display according to the display parameter configuration or the hardware running configuration of the user terminal.
It should be noted that, for this selection for display, a user terminal may select one or more results for display.
Specifically, the display parameter configuration may be the display resolution, and the hardware running configuration may be parameters such as the terminal running speed and resource occupation. Thus, one result is selected for display according to the display resolution, or according to parameters such as the terminal running speed and resource occupation, and the result displayed is not necessarily the single best image result.
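A minimal sketch of such a selection step, assuming each synthesized result is tagged with its resolution; the structure of `results`, the `cpu_load` input and the 0.8 threshold are illustrative assumptions, not taken from the patent.

def select_result_for_display(results, display_width, display_height, cpu_load):
    """Pick one multi-viewpoint synthesis result to display.

    `results` is a list of dicts like {"width": ..., "height": ..., "data": ...};
    on a heavily loaded terminal a smaller rendition is preferred.
    """
    # Discard results larger than the screen can show.
    fitting = [r for r in results
               if r["width"] <= display_width and r["height"] <= display_height]
    candidates = fitting or results
    if cpu_load > 0.8:
        # Constrained hardware: pick the lightest candidate.
        return min(candidates, key=lambda r: r["width"] * r["height"])
    # Otherwise pick the highest-resolution candidate that fits.
    return max(candidates, key=lambda r: r["width"] * r["height"])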
The image processing method of an embodiment of the present invention, as shown in Fig. 3, comprises the following steps:
Step 201: an image processing terminal receives acquisition parameters and multimedia data resources based on different viewpoints.
Step 202: the image processing terminal obtains an image processing result of multi-viewpoint synthesis according to the acquisition parameters and the multimedia data resources, and sends the image processing result.
It should be noted that, in step 202, after the image processing result of multi-viewpoint synthesis is obtained, the image processing result may be sent to the user terminal, or it may only be made available for subsequent use, that is, it may temporarily not be sent to the user terminal. In other words, the use of the final processing result of the image processing terminal covers several situations: 1) the image processing terminal, for example a server, provides a link, and the user terminal sends a request to obtain the image processing result; 2) the image processing terminal, for example a server, sends the result directly to the user terminal, as in step 202; 3) the result may also be stored somewhere where it can be queried or otherwise operated on.
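The three delivery options could be organised roughly as below; `DeliveryMode`, `publish_result` and the `/results/{id}` link format are hypothetical names used only for illustration.

from enum import Enum, auto

class DeliveryMode(Enum):
    LINK = auto()    # server exposes a download link; the terminal requests it
    PUSH = auto()    # server sends the result directly to the user terminal
    STORE = auto()   # result is only stored for later query or other use

def publish_result(result_id, result_data, mode, storage, push=None):
    """Make a synthesis result available according to the chosen mode."""
    storage[result_id] = result_data          # always persisted on the focus device
    if mode is DeliveryMode.LINK:
        return f"/results/{result_id}"        # hypothetical URL the terminal can fetch
    if mode is DeliveryMode.PUSH and push is not None:
        push(result_id, result_data)          # caller-supplied push callback
    return None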
In a preferred implementation of this embodiment of the present invention, the image processing terminal in step 202 obtaining an image processing result of multi-viewpoint synthesis according to the acquisition parameters and the multimedia data resources comprises:
the image processing terminal establishing a reference coordinate system and a reference resource;
the image processing terminal comparing the established reference coordinate system and reference resource with the acquisition parameters and the multimedia data resources, grouping the multimedia data resources and selecting therefrom the grouped data that match the reference coordinate system and the reference resource; and
performing data synthesis for each viewpoint on the grouped data to obtain multiple different viewpoint data sets as the image processing result of multi-viewpoint synthesis.
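A minimal sketch of this grouping-and-synthesis flow, assuming a Python implementation; the matching criterion (distance to a reference position) and the placeholder `synthesize_view` stand in for whatever comparison and view-synthesis algorithms an actual image processing terminal would use.

import math

def group_and_synthesize(resources, reference_position, max_distance=50.0):
    """`resources` is a list of dicts with 'viewpoint', 'position' (x, y, z)
    and 'media' keys; returns {viewpoint: synthesized_data} as the
    multi-viewpoint image processing result."""
    groups = {}
    for res in resources:
        # Compare each uploaded resource against the reference coordinate system.
        dx, dy, dz = (a - b for a, b in zip(res["position"], reference_position))
        if math.sqrt(dx * dx + dy * dy + dz * dz) > max_distance:
            continue  # does not match the reference; drop it
        groups.setdefault(res["viewpoint"], []).append(res["media"])
    # Per-viewpoint synthesis: a real system would run view synthesis
    # (e.g. stitching or depth-based rendering) on each group here.
    return {vp: synthesize_view(items) for vp, items in groups.items()}

def synthesize_view(items):
    # Placeholder: a real implementation would fuse the items into one view.
    return items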
The user terminal of an embodiment of the present invention comprises: a reporting unit, configured to report acquisition parameters and multimedia data resources based on different viewpoints; and a first receiving unit, configured to receive an image processing result of multi-viewpoint synthesis obtained according to the acquisition parameters and the multimedia data resources.
It should be pointed out that the user terminal is any user terminal whose position is not fixed.
In a preferred implementation of this embodiment of the present invention, the acquisition parameters comprise at least one of: shooting parameters and a shooting position.
In a preferred implementation of this embodiment of the present invention, the user terminal further comprises: a display unit, configured to receive the image processing result of multi-viewpoint synthesis obtained according to the acquisition parameters and the multimedia data resources, and to select one result from the multi-viewpoint synthesis results for display according to the display parameter configuration or the hardware running configuration of the user terminal.
The image processing terminal of an embodiment of the present invention may also be called a focus device. The focus device may be a user terminal with relatively high performance, or it may be a server.
In a preferred implementation of this embodiment of the present invention, the image processing terminal comprises: a second receiving unit, configured to receive acquisition parameters and multimedia data resources based on different viewpoints; an image processing unit, configured to obtain an image processing result of multi-viewpoint synthesis according to the acquisition parameters and the multimedia data resources; and a sending unit, configured to send the image processing result.
In a preferred implementation of this embodiment of the present invention, the image processing unit further comprises: an establishing subunit, configured to establish a reference coordinate system and a reference resource; a comparing subunit, configured to compare the established reference coordinate system and reference resource with the acquisition parameters and the multimedia data resources, to group the multimedia data resources and to select therefrom the grouped data that match the reference coordinate system and the reference resource; and a synthesizing subunit, configured to perform data synthesis for each viewpoint on the grouped data to obtain multiple different viewpoint data sets as the image processing result of multi-viewpoint synthesis.
The image processing system of an embodiment of the present invention comprises the above user terminal and the above image processing terminal.
The source of the data resources in the embodiments of the present invention is not limited to fixed users; each free user can actively provide data resources. Nor is the approach limited to 3D application scenarios; it covers all image processing scenes. For convenience, the 3D application scenario is taken as an example in the description below.
In a 3D application scenario, in the prior art, in order to obtain a 3D resource, a resource producer such as a TV station has to adjust the camera parameters, the camera positions and so on in advance. That is, when producing a 3D image resource at present, a number of cameras must be placed around the photographed object according to certain requirements, the camera parameters must be recorded before shooting, and the cameras must be placed at different angles and heights according to the viewpoints. After the data resources (the data resources being multimedia data resources, including image and video resources) are obtained, they are processed: the producer synthesizes 3D views of different viewpoints from the data collected at the different positions and the corresponding camera parameters, and finally presents them to the users through a program. The core of the whole image processing process is how to obtain the data resources, and because the prior art relies on fixed users, the fixed positions and the advance preparation cost a great deal of time and money.
In the 3D application scenario, in order to avoid the large amount of time and cost that fixed users spend on fixed positions and advance preparation when capturing the photographed object, the embodiments of the present invention propose that free users also capture the photographed object. Since they are free users, there is no need to set camera positions or preset camera parameters in advance, which saves a great deal of time and cost. Moreover, because the data resources are provided by the users themselves, the image processing becomes a free-viewpoint synthesis business scheme: the large number of terminal devices currently equipped with relevant sensors carry the relevant acquisition parameters (including camera parameter information, position information and so on) in the images they capture, and hand the associated image resources over to a focus device for processing. The focus device judges and groups the obtained image resource information according to a coordinate system, performs view synthesis on this information according to the judgement results to form data of different viewpoints, and finally integrates them into multi-viewpoint 3D data for different users to use. The greatest feature of this scheme is that the source of the data is free: many users of the same scene can, through this service, have the images obtained with their own terminal devices at different angles synthesized into views, and the data obtained from different angles are more realistic; hence it is called a free-viewpoint synthesis business scheme.
In summary, for the 3D application scenario, the embodiments of the present invention constitute a business scheme of multimedia 3D multi-viewpoint synthesis, comprising: 1) each free user sends the images or video resources on his or her device to the focus device, where the focus device may be a terminal device with relatively high performance or a server device; 2) the focus device groups the obtained image or video resources according to an established coordinate evaluation system, judges the optimum images (where "optimum" covers not only the shooting image quality but also the shooting angle and so on), chooses the optimum images of different viewpoints for view synthesis to form image or video data of different viewpoints, and then provides the synthesized data to each user.
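The "optimum image" judgement mentioned above, which weighs shooting quality against shooting angle, might look roughly like this; the scoring formula and the 0.7 weight are illustrative assumptions, not specified by the patent.

def pick_optimum(candidates, target_angle_deg, quality_weight=0.7):
    """Choose, within one viewpoint group, the candidate whose quality score is
    high and whose shooting angle is closest to the viewpoint's target angle."""
    def score(c):
        # Normalise the angle error to [0, 1] and trade it off against quality.
        angle_error = abs(c["angle_deg"] - target_angle_deg) / 180.0
        return quality_weight * c["quality"] - (1 - quality_weight) * angle_error
    return max(candidates, key=score)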
Compared with the prior art, the advantages of applying the embodiments of the present invention to the 3D application scenario are mainly reflected in the following aspects:
1. There is no need to set the position of each device in advance;
2. Users are allowed to define the relevant parameters and synthesis parameter settings;
3. The capabilities of the user devices are fully utilized.
Scheme 1: an implementation in the 3D application scenario. This scheme obtains view-synthesized 3D images or 3D video data from data resources provided by users. It involves user 1, user 2, ..., user N, whose devices serve as terminal devices for collecting data resources, and a focus device that performs the multi-viewpoint synthesis image processing on the data resources collected by the terminal devices. As shown in Fig. 4, it comprises the following steps:
Step 301: each user shoots the corresponding object with his or her own device to obtain the corresponding image or video resource, which needs to carry the camera parameters, the position parameters and so on. The focus device then provides the information for interacting with the users, for example through a network link, and provides the corresponding storage space, and each user (user 1, user 2, ..., user N) reports the data resources to the focus device for processing.
Here, the data resources include multimedia data resources such as the captured images or video resources; when reporting the data resources, the parameter information of the user's camera, the position information and so on may also be carried at the same time.
Step 302: the focus device performs the view synthesis image processing on the reported data resources.
Here, step 302 comprises: 1) the focus device sets the selected reference coordinate system and reference resource, and sets the parameter information; 2) the focus device compares the set parameter information with the obtained images or video resources, makes a judgement, groups them, and selects the optimum resources according to the parameter information in the resources and the parameter information set in the focus device; 3) the sorted and grouped images undergo view synthesis image processing according to the conditions, that is, through the optimum selection of the data, the focus device performs data synthesis for each viewpoint on the grouped data to form different viewpoint data sets.
Step 303: the focus device stores the 3D data obtained by view synthesis, generates the corresponding information and provides it to the users.
In this free-viewpoint synthesis scheme, the users provide the images or video resources of the photographed object to the focus device over the network; the focus device distinguishes the obtained data resources according to the set conditions, sorts out and selects the suitable data resources for view synthesis, forms synthesized 3D data of different viewpoints, and finally provides them to the users.
Scheme 2: an implementation in the 3D application scenario. This scheme uses the 3D data resources provided by users to obtain view-synthesized 3D images or 3D video data, and then forms the template resources used for augmented reality.
Step 401: each user shoots the corresponding object with his or her own device to obtain the corresponding image or video resource, which needs to carry the camera parameters, the position parameters and so on. The focus device then provides the information for interacting with the users, for example through a network link, and provides the corresponding storage space, and each user (user 1, user 2, ..., user N) reports the data resources to the focus device for processing.
Here, the data resources include the captured images or video resources, as well as the parameter information of the user's camera, the position information and so on.
Step 402: the focus device performs the view synthesis image processing on the reported data resources.
Here, step 402 comprises: 1) the focus device sets the selected reference coordinate system and reference resource, and sets the parameter information; 2) the focus device compares the set parameter information with the obtained images or video resources, makes a judgement, groups them, and selects the optimum resources according to the parameter information in the resources and the parameter information set in the focus device; 3) the sorted and grouped images undergo view synthesis image processing according to the conditions, that is, through the optimum selection of the data, the focus device performs data synthesis for each viewpoint on the grouped data to form different viewpoint data sets.
Step 403: according to the business needs, the view synthesis data are post-processed to form the template data resources of the augmented reality business.
Step 404: the focus device stores the view synthesis data, generates the corresponding information and provides it to the users. By using these template data resources, users can enjoy the convenience brought by the augmented reality business.
Regarding the template data resources, for example: scheme 1 above only provides basic information, in other words, all the information is recorded but cannot be selectively processed. For instance, shooting a UEFA Champions League stadium yields information that includes the pitch, the players and so on, but no selection or later modification can be performed; view synthesis can only be carried out continuously as the scene changes, yielding different results. This scheme, by contrast, provides a template data resource: the recorded information can be selectively processed and modified later. For instance, shooting a UEFA Champions League stadium yields information including the pitch, the players and so on, and this information is taken as a template resource; if different players later play on the same pitch, the data can be changed at will for post-processing.
Figs. 5-6 are schematic diagrams of data formats applying the embodiments of the present invention. In the data format of an image or video resource, the corresponding camera parameters and the shooting position parameters corresponding to that image or video resource are added. The camera parameters are provided by the camera on the shooting terminal, and the position parameters can be obtained from the position sensor devices on the shooting terminal; for example, devices such as an electronic compass or GPS can provide the corresponding position parameters. Besides the arrangement shown in Figs. 5-6, the camera parameters and position parameters can also be placed in the header data field of the resource.
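For illustration only, one way to attach the camera parameters and position parameters to an image or video resource (comparable to placing them in a header data field) is a length-prefixed header followed by the media payload; the JSON layout below is an assumption, not the actual format of Figs. 5-6.

import json
import struct

def pack_resource(media_bytes, camera_params, position_params):
    """Prepend a length-prefixed JSON header carrying the parameters."""
    header = json.dumps({"camera": camera_params,
                         "position": position_params}).encode("utf-8")
    return struct.pack(">I", len(header)) + header + media_bytes

def unpack_resource(blob):
    """Split the blob back into (parameters, media payload)."""
    (header_len,) = struct.unpack(">I", blob[:4])
    header = json.loads(blob[4:4 + header_len].decode("utf-8"))
    return header, blob[4 + header_len:]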
If the modules described in the embodiments of the present invention are implemented in the form of software function modules and sold or used as independent products, they may also be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present invention, or the part that contributes to the prior art, may essentially be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device or the like) to perform all or part of the method described in each embodiment of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a portable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc. In this way, the embodiments of the present invention are not restricted to any specific combination of hardware and software.
Correspondingly, an embodiment of the present invention further provides a computer storage medium storing a computer program, the computer program being used to execute the image processing method of the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the protection scope of the present invention.

Claims (11)

1. An image processing method, characterized in that the method comprises:
a user terminal reporting acquisition parameters and multimedia data resources based on different viewpoints, the user terminal being any user terminal whose position is not fixed; and
the user terminal receiving an image processing result of multi-viewpoint synthesis obtained according to the acquisition parameters and the multimedia data resources.
2. The method according to claim 1, characterized in that the acquisition parameters comprise at least one of: shooting parameters and a shooting position.
3. The method according to claim 1 or 2, characterized in that the method further comprises: after receiving the image processing result of multi-viewpoint synthesis obtained according to the acquisition parameters and the multimedia data resources, selecting one result from the multi-viewpoint synthesis results for display according to the display parameter configuration or the hardware running configuration of the user terminal.
4. An image processing method, characterized in that the method comprises:
an image processing terminal receiving acquisition parameters and multimedia data resources based on different viewpoints; and
the image processing terminal obtaining an image processing result of multi-viewpoint synthesis according to the acquisition parameters and the multimedia data resources.
5. The method according to claim 4, characterized in that the image processing terminal obtaining an image processing result of multi-viewpoint synthesis according to the acquisition parameters and the multimedia data resources comprises:
the image processing terminal establishing a reference coordinate system and a reference resource;
the image processing terminal comparing the established reference coordinate system and reference resource with the acquisition parameters and the multimedia data resources, grouping the multimedia data resources and selecting therefrom the grouped data that match the reference coordinate system and the reference resource; and
the image processing terminal performing data synthesis for each viewpoint on the grouped data to obtain multiple different viewpoint data sets as the image processing result of multi-viewpoint synthesis.
6. A user terminal, characterized in that the user terminal comprises:
a reporting unit, configured to report acquisition parameters and multimedia data resources based on different viewpoints; and
a first receiving unit, configured to receive an image processing result of multi-viewpoint synthesis obtained according to the acquisition parameters and the multimedia data resources;
the user terminal being any user terminal whose position is not fixed.
7. The user terminal according to claim 6, characterized in that the acquisition parameters comprise at least one of: shooting parameters and a shooting position.
8. The user terminal according to claim 6 or 7, characterized in that the user terminal further comprises:
a display unit, configured to receive the image processing result of multi-viewpoint synthesis obtained according to the acquisition parameters and the multimedia data resources, and to select one result from the multi-viewpoint synthesis results for display according to the display parameter configuration or the hardware running configuration of the user terminal.
9. An image processing terminal, characterized in that the image processing terminal comprises:
a second receiving unit, configured to receive acquisition parameters and multimedia data resources based on different viewpoints;
an image processing unit, configured to obtain an image processing result of multi-viewpoint synthesis according to the acquisition parameters and the multimedia data resources; and
a sending unit, configured to send the image processing result.
10. The image processing terminal according to claim 9, characterized in that the image processing unit further comprises:
an establishing subunit, configured to establish a reference coordinate system and a reference resource;
a comparing subunit, configured to compare the established reference coordinate system and reference resource with the acquisition parameters and the multimedia data resources, to group the multimedia data resources and to select therefrom the grouped data that match the reference coordinate system and the reference resource; and
a synthesizing subunit, configured to perform data synthesis for each viewpoint on the grouped data to obtain multiple different viewpoint data sets as the image processing result of multi-viewpoint synthesis.
11. An image processing system, characterized by comprising the user terminal according to any one of claims 6 to 8 and the image processing terminal according to any one of claims 9 to 10.
CN201310645595.1A 2013-12-04 2013-12-04 Image processing method, user terminal, image processing terminal and system Expired - Fee Related CN104994369B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201310645595.1A CN104994369B (en) 2013-12-04 2013-12-04 Image processing method, user terminal, image processing terminal and system
PCT/CN2014/075687 WO2014183533A1 (en) 2013-12-04 2014-04-18 Image processing method, user terminal, and image processing terminal and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310645595.1A CN104994369B (en) 2013-12-04 2013-12-04 Image processing method, user terminal, image processing terminal and system

Publications (2)

Publication Number Publication Date
CN104994369A true CN104994369A (en) 2015-10-21
CN104994369B CN104994369B (en) 2018-08-21

Family

ID=51897680

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310645595.1A Expired - Fee Related CN104994369B (en) 2013-12-04 2013-12-04 Image processing method, user terminal, image processing terminal and system

Country Status (2)

Country Link
CN (1) CN104994369B (en)
WO (1) WO2014183533A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107181938A (en) * 2016-03-11 2017-09-19 深圳超多维光电子有限公司 Method for displaying image and equipment, image analysis method, equipment and system
CN108038836A (en) * 2017-11-29 2018-05-15 维沃移动通信有限公司 Image processing method, device and mobile terminal
CN108566514A (en) * 2018-04-20 2018-09-21 Oppo广东移动通信有限公司 Image combining method and device, equipment, computer readable storage medium
CN113438462A (en) * 2021-06-04 2021-09-24 北京小米移动软件有限公司 Multi-device interconnection shooting method and device, electronic device and storage medium
WO2021249414A1 (en) * 2020-06-10 2021-12-16 阿里巴巴集团控股有限公司 Data processing method and system, related device, and storage medium
CN114638771A (en) * 2022-03-11 2022-06-17 北京拙河科技有限公司 Video fusion method and system based on hybrid model
CN114697516A (en) * 2020-12-25 2022-07-01 花瓣云科技有限公司 Three-dimensional model reconstruction method, device and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105872540A (en) * 2016-04-26 2016-08-17 乐视控股(北京)有限公司 Video processing method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1461560A (en) * 2001-03-15 2003-12-10 康斯坦丁迪斯·阿波斯托洛斯 System for live multiple viewpoint recording and playback of live or video recorded signal
CN101662693A (en) * 2008-08-27 2010-03-03 深圳华为通信技术有限公司 Method, device and system for sending and playing multi-viewpoint media content
US20110242356A1 (en) * 2010-04-05 2011-10-06 Qualcomm Incorporated Combining data from multiple image sensors
CN103119957A (en) * 2010-10-01 2013-05-22 索尼公司 Content transmitting device, content transmitting method, content reproduction device, content reproduction method, program, and content delivery system
CN103548333A (en) * 2011-05-23 2014-01-29 索尼公司 Image processing device and method, supplement image generation device and method, program, and recording medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1461560A (en) * 2001-03-15 2003-12-10 康斯坦丁迪斯·阿波斯托洛斯 System for live multiple viewpoint recording and playback of live or video recorded signal
CN101662693A (en) * 2008-08-27 2010-03-03 深圳华为通信技术有限公司 Method, device and system for sending and playing multi-viewpoint media content
US20110242356A1 (en) * 2010-04-05 2011-10-06 Qualcomm Incorporated Combining data from multiple image sensors
CN103119957A (en) * 2010-10-01 2013-05-22 索尼公司 Content transmitting device, content transmitting method, content reproduction device, content reproduction method, program, and content delivery system
CN103548333A (en) * 2011-05-23 2014-01-29 索尼公司 Image processing device and method, supplement image generation device and method, program, and recording medium

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107181938A (en) * 2016-03-11 2017-09-19 深圳超多维光电子有限公司 Method for displaying image and equipment, image analysis method, equipment and system
CN108038836A (en) * 2017-11-29 2018-05-15 维沃移动通信有限公司 Image processing method, device and mobile terminal
CN108566514A (en) * 2018-04-20 2018-09-21 Oppo广东移动通信有限公司 Image combining method and device, equipment, computer readable storage medium
WO2021249414A1 (en) * 2020-06-10 2021-12-16 阿里巴巴集团控股有限公司 Data processing method and system, related device, and storage medium
CN114697516A (en) * 2020-12-25 2022-07-01 花瓣云科技有限公司 Three-dimensional model reconstruction method, device and storage medium
CN114697516B (en) * 2020-12-25 2023-11-10 花瓣云科技有限公司 Three-dimensional model reconstruction method, apparatus and storage medium
CN113438462A (en) * 2021-06-04 2021-09-24 北京小米移动软件有限公司 Multi-device interconnection shooting method and device, electronic device and storage medium
CN113438462B (en) * 2021-06-04 2022-09-02 北京小米移动软件有限公司 Multi-device interconnection shooting method and device, electronic device and storage medium
CN114638771A (en) * 2022-03-11 2022-06-17 北京拙河科技有限公司 Video fusion method and system based on hybrid model
CN114638771B (en) * 2022-03-11 2022-11-29 北京拙河科技有限公司 Video fusion method and system based on hybrid model

Also Published As

Publication number Publication date
WO2014183533A1 (en) 2014-11-20
CN104994369B (en) 2018-08-21

Similar Documents

Publication Publication Date Title
CN104994369A (en) Image processing method, user terminal, and image processing terminal and system
US20200112625A1 (en) Adaptive streaming of virtual reality data
CN107911644B (en) Method and device for carrying out video call based on virtual face expression
CN107079184B (en) Interactive binocular video display
CN110612721B (en) Video processing method and terminal equipment
CN104301769B (en) Method, terminal device and the server of image is presented
CN102800065A (en) Augmented reality equipment and method based on two-dimensional code identification and tracking
CN106170101A (en) Contents providing system, messaging device and content reproducing method
CN105657438A (en) Method and apparatus for processing panoramic live video resource
CN104715449A (en) Method and device for generating mosaic image
KR20170086203A (en) Method for providing sports broadcasting service based on virtual reality
CN102958114B (en) The method for accessing augmented reality user's context
CN104010206B (en) Based on the method and system of the virtual reality video playback in geographical position
KR102076139B1 (en) Live Streaming Service Method and Server Apparatus for 360 Degree Video
CN102984560B (en) The method and apparatus that video is played from breakpoint
CN104980643A (en) Picture processing method, terminal and system
CN104679879A (en) Intelligent storage method and intelligent storage device for photo
CN110192392A (en) Method and apparatus for deriving composite rail
JP2011233005A (en) Object displaying device, system, and method
CN108566514A (en) Image combining method and device, equipment, computer readable storage medium
CN108401163B (en) Method and device for realizing VR live broadcast and OTT service system
CN113012082A (en) Image display method, apparatus, device and medium
CN108430032A (en) A kind of method and apparatus for realizing that VR/AR device locations are shared
CN110349504A (en) A kind of museum guiding system based on AR
JP2018037755A (en) Information provide system regarding target object and information provision method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20180515

Address after: 210012 No. 68, Bauhinia Road, Ningnan street, Yuhuatai District, Nanjing, Jiangsu

Applicant after: Nanjing Zhongxing Software Co., Ltd.

Address before: 518057 Nanshan District high tech Industrial Park, Shenzhen, Guangdong, Ministry of justice, Zhongxing Road, South China road.

Applicant before: ZTE Corporation

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20191106

Address after: 518057 Nanshan District science and Technology Industrial Park, Guangdong high tech Industrial Park, ZTE building

Patentee after: ZTE Communications Co., Ltd.

Address before: 210012 Nanjing, Yuhuatai District, South Street, Bauhinia Road, No. 68

Patentee before: Nanjing Zhongxing Software Co., Ltd.

TR01 Transfer of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180821

Termination date: 20201204

CF01 Termination of patent right due to non-payment of annual fee