CN103458307B - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN103458307B
CN103458307B CN201310395439.4A
Authority
CN
China
Prior art keywords
frame
image
thickness
target image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310395439.4A
Other languages
Chinese (zh)
Other versions
CN103458307A (en)
Inventor
孙声鹏
黄瀚海
李新乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen TCL New Technology Co Ltd
Original Assignee
TCL Optoelectronics Technology Huizhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TCL Optoelectronics Technology Huizhou Co Ltd filed Critical TCL Optoelectronics Technology Huizhou Co Ltd
Priority to CN201310395439.4A priority Critical patent/CN103458307B/en
Publication of CN103458307A publication Critical patent/CN103458307A/en
Application granted granted Critical
Publication of CN103458307B publication Critical patent/CN103458307B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An image processing method comprises the following steps: receiving a parameter value input by a user; calculating dimensional parameters of multiple frames according to the parameter value; and generating a target image according to source image information and the dimensional parameters of the multiple frames, the target image comprising the multiple successively nested frames and an image embedded among the frames. The present invention also provides a corresponding image processing apparatus. With the above image processing method and apparatus, the parameter value input by the user can be used to adjust the thickness of the frames, the number of frames, and so on. The successively nested frames visually give the target image a 3D effect for the user, and because the frame parameters are set by the user, an interactive function is realized and the needs of different users can be met.

Description

Image processing method and device
Technical field
The present invention relates to the field of image processing, and more particularly to an image processing method and device.
Background art
The television set is a product of the combination of radio communication and broadcasting; its appearance changed human life, the spread of information, and ways of thinking. Television has entered countless households. Today its functions are no longer limited to watching live programs, obtaining news, or receiving distance education; it also serves as part of interior decoration and is one of the essential household appliances. A television set is a device for instantly transmitting moving visual images by electronic means. At the transmitting end, each fine portion of the scene is converted into an electrical signal according to its brightness and chrominance and transmitted in sequence. At the receiving end, the brightness and chrominance of each fine portion are displayed at the corresponding geometric positions to reproduce the entire original image.
However, when people watch television, the distance between the viewer and the screen is usually fixed (for example, the distance between the sofa and the screen), and the picture size is also fixed. This results in a relatively monotonous visual effect with certain limitations.
Summary of the invention
Based on this, it is necessary to provide an image processing method and apparatus with a better visual effect.
An image processing method comprises the following steps:
receiving a parameter value input by a user;
calculating dimensional parameters of multiple frames according to the parameter value;
generating a target image according to source image information and the dimensional parameters of the multiple frames, the target image comprising the multiple successively nested frames and an image embedded among the frames; wherein the thicknesses of the multiple frames decrease successively from the outside to the inside.
In one embodiment, the multiple frames are rectangular, and the thicknesses of their wide sides and long sides, from the outside to the inside, respectively satisfy:
hw_i = a*b*m / [(a + b*i)*(a + b*(i-1))], (i = 1, 2, 3, ..., n);
hm_i = a*b*w / [(a + b*i)*(a + b*(i-1))], (i = 1, 2, 3, ..., n);
where hw_i is the thickness of the wide side of the i-th frame from the outside to the inside, hm_i is the thickness of the long side of the i-th frame from the outside to the inside, the parameter value input by the user is at least one of a, b and n, and the remaining values are preset values.
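As a minimal illustrative sketch (not part of the patent disclosure), the relation above can be evaluated directly; the function name and the numeric example values below are assumptions chosen only for demonstration:

def frame_thicknesses(a, b, n, m, w):
    # Wide-side (hw) and long-side (hm) thicknesses of the n nested frames,
    # listed from the outermost (i = 1) to the innermost (i = n).
    hw = [a * b * m / ((a + b * i) * (a + b * (i - 1))) for i in range(1, n + 1)]
    hm = [a * b * w / ((a + b * i) * (a + b * (i - 1))) for i in range(1, n + 1)]
    return hw, hm

# Assumed example: viewing distance a = 3000 mm, depth interval b = 500 mm,
# n = 4 frames, and a 1920 x 1080 target image (m = 960, w = 540 pixels).
hw, hm = frame_thicknesses(a=3000.0, b=500.0, n=4, m=960.0, w=540.0)
print(hw)  # each value is smaller than the previous one: thicknesses decrease inward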
In one embodiment, m is half of the length of the target image, and w is half of the width of the target image.
In one embodiment, in the step of generating the target image according to the source image information and the dimensional parameters of the multiple rectangular frames, where the target image comprises the multiple successively nested rectangular frames and the image embedded among the frames, the image embedded among the frames is obtained by mapping the source image information, and the mapping scales the source image down by the following ratio:
(2*m - 2*(hw_1 + hw_2 + ... + hw_n)) / (2*m), or equivalently (2*w - 2*(hm_1 + hm_2 + ... + hm_n)) / (2*w).
An image processing apparatus includes:
an input module for receiving a parameter value input by a user;
a computing module for calculating dimensional parameters of multiple frames according to the parameter value;
an image generation module for generating a target image according to source image information and the dimensional parameters of the multiple frames, the target image comprising the multiple successively nested frames and an image embedded among the frames; wherein the thicknesses of the multiple frames calculated by the computing module decrease successively from the outside to the inside.
In one embodiment, the multiple frames calculated by the computing module are rectangular, and the thicknesses of their wide sides and long sides, from the outside to the inside, respectively satisfy:
hw_i = a*b*m / [(a + b*i)*(a + b*(i-1))], (i = 1, 2, 3, ..., n);
hm_i = a*b*w / [(a + b*i)*(a + b*(i-1))], (i = 1, 2, 3, ..., n);
where hw_i is the thickness of the wide side of the i-th frame from the outside to the inside, hm_i is the thickness of the long side of the i-th frame from the outside to the inside, the parameter value input by the user is at least one of a, b and n, and the remaining values are preset values.
In one embodiment, m is half of the length of the target image, and w is half of the width of the target image.
In one embodiment, the image embedded among the frames generated by the image generation module is obtained by mapping the source image information, and the mapping scales the source image down by the following ratio:
(2*m - 2*(hw_1 + hw_2 + ... + hw_n)) / (2*m), or equivalently (2*w - 2*(hm_1 + hm_2 + ... + hm_n)) / (2*w).
With the above image processing method and apparatus, the parameter value input by the user can be used to adjust the thickness of the frames, the number of frames, and so on. The successively nested frames visually give the target image a 3D effect for the user, and because the frame parameters are set by the user, an interactive function is realized and the needs of different users can be met.
The thicknesses of the successively nested frames decrease from the outside to the inside, which produces a more realistic 3D effect.
The thicknesses of the long sides and wide sides of the rectangular frames follow the above relations, so that the target image better matches the stereoscopic effect of three-dimensional vision.
Brief description of the drawings
Fig. 1 is a flow chart of the steps of an image processing method according to a preferred embodiment of the present invention;
Fig. 2 is a schematic diagram of the display effect of the target image described in Fig. 1;
Fig. 3 is a schematic diagram of the mathematical model of the frame thickness relationship in Fig. 1;
Fig. 4 is a functional block diagram of an image processing apparatus according to a preferred embodiment of the present invention.
Detailed description of the embodiments
As shown in Fig. 1, a flow chart of the steps of an image processing method according to a preferred embodiment of the present invention comprises the following steps:
Step S101: receive a parameter value input by a user.
Step S102: calculate dimensional parameters of multiple frames according to the parameter value.
Step S103: generate a target image according to source image information and the dimensional parameters of the multiple frames, the target image comprising the multiple successively nested frames and an image embedded among the frames.
In the above image processing method, the parameter value input by the user can be used to adjust the thickness of the frames, the number of frames, and so on. The successively nested frames visually give the target image a 3D effect for the user, and because the frame parameters are set by the user, an interactive function is realized and the needs of different users can be met.
According to the principle that objects observed by the human eye appear larger when near and smaller when far, a 3D effect with a sense of depth can be achieved. In this embodiment, the thicknesses of the successively nested frames decrease from the outside to the inside. Fig. 2 is a schematic diagram of the display effect of the target image 400 described in Fig. 1: four rectangular frames 401 to 404 are nested in turn with the image 405, the thicknesses h of the frames 401 to 404 decrease successively, and the frames can, for example, alternate between black and white or between other colors.
Referring to Fig. 3, which is a schematic diagram of the mathematical model of the rectangular-frame thickness relationship according to a preferred embodiment of the present invention, point O is the viewpoint, i.e., the position of the human eye; it lies on the perpendicular bisector of the screen 200 at a distance a from the screen. Because O is on the perpendicular bisector of the screen 200, m is half of the length of the screen 200 (which is equal to the length of the target image); b is the visual depth-of-field interval. The intersections with the screen 200 of the lines from point O to the critical points of the successive depth-of-field intervals b determine the wide-side thicknesses hw_1, hw_2, ... of the multiple frames from the outside to the inside. From the similar right triangles in Fig. 3, the following equations are readily obtained:
a / (a + b) = (m - hw_1) / m;
a / (a + 2*b) = (m - hw_1 - hw_2) / m;
...
a / (a + n*b) = (m - (hw_1 + hw_2 + ... + hw_n)) / m.
Subtracting each pair of adjacent equations side by side gives:
hw_i = a*m / (a + b*(i-1)) - a*m / (a + b*i);
which, after simple rearrangement, yields:
hw_i = a*b*m / [(a + b*i)*(a + b*(i-1))], (i = 1, 2, 3, ..., n).
The thicknesses of the long sides of the frames can be obtained in the same way; by the analogous conversion:
hm_i = a*b*w / [(a + b*i)*(a + b*(i-1))], (i = 1, 2, 3, ..., n).
Here hw_i is the thickness of the wide side of the i-th frame from the outside to the inside and hm_i is the thickness of the long side of the i-th frame from the outside to the inside; m is half of the length of the screen 200, w is half of the width of the screen 200, and n is the number of frames. The parameter value input by the user is at least one of a, b and n; for the remaining values, the system presets are used when the user does not input them. Fig. 3 shows four frames, i.e., n = 4.
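A short numeric check of this derivation (the values are illustrative assumptions, not taken from the patent) confirms that the partial sums of hw_i satisfy the similar-triangle relation above, so the innermost half-length reduces to m*a/(a + n*b):

# Check that m - (hw_1 + ... + hw_i) == m*a/(a + i*b) for every i,
# i.e., that the nested frames reproduce the projected depth-of-field planes.
a, b, n, m = 3000.0, 500.0, 4, 960.0
hw = [a * b * m / ((a + b * i) * (a + b * (i - 1))) for i in range(1, n + 1)]

remaining = m
for i, t in enumerate(hw, start=1):
    remaining -= t
    assert abs(remaining - m * a / (a + i * b)) < 1e-9

print(remaining / m)  # a / (a + n*b) = 0.6 for these example values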
It is worth noting that when n = 1, only one frame is set, i.e., an ordinary frame, and the user can then set other attributes of the frame, such as its color and the thickness of each side.
Referring again to Fig. 2, the original size of the source image is the size of the screen 200, i.e., length 2*m and width 2*w. After the frames are set, the source image is mapped into the frames; the mapping only needs to scale the source image down, and because the reduction ratios of the length and the width are identical, the scaling ratio is:
(2*m - 2*(hw_1 + hw_2 + ... + hw_n)) / (2*m), or equivalently (2*w - 2*(hm_1 + hm_2 + ... + hm_n)) / (2*w).
That is, the long side of the image embedded among the frames equals the long side of the source image minus the sum of the thicknesses of all frame borders along the long-side direction.
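The mapping step could be sketched, for example, with the Pillow imaging library; the library choice, the alternating black-and-white frame colours, the default values and the file names below are illustrative assumptions rather than the patent's implementation:

from PIL import Image, ImageDraw

def render_nested_frames(source, a=3000.0, b=500.0, n=4):
    # Build the target image: n nested rectangular frames of decreasing
    # thickness with the scaled-down source image embedded in the centre.
    W, H = source.size                       # 2*m by 2*w in the notation above
    m, w = W / 2.0, H / 2.0
    hw = [a * b * m / ((a + b * i) * (a + b * (i - 1))) for i in range(1, n + 1)]
    hm = [a * b * w / ((a + b * i) * (a + b * (i - 1))) for i in range(1, n + 1)]

    target = Image.new("RGB", (W, H))
    draw = ImageDraw.Draw(target)
    left = top = 0.0
    right, bottom = float(W), float(H)
    for i in range(n):
        fill = (0, 0, 0) if i % 2 == 0 else (255, 255, 255)   # alternate colours
        draw.rectangle([int(left), int(top), int(right) - 1, int(bottom) - 1], fill=fill)
        left, right = left + hw[i], right - hw[i]              # shrink along the length
        top, bottom = top + hm[i], bottom - hm[i]              # shrink along the width

    # Scaling ratio from the relation above; identical for length and width.
    scale = (m - sum(hw)) / m
    inner = source.resize((max(1, round(W * scale)), max(1, round(H * scale))))
    target.paste(inner, (round(left), round(top)))
    return target

# Example use (hypothetical file names):
# framed = render_nested_frames(Image.open("source.jpg"))
# framed.save("framed.jpg")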
In this way, what the user sees is a more realistic stereoscopic image that conforms to 3D vision.
As shown in Fig. 4, the functional block diagram of an image processing apparatus 50 according to a preferred embodiment of the present invention includes:
an input module 501 for receiving a parameter value input by a user;
a computing module 503 for calculating dimensional parameters of multiple frames according to the parameter value;
an image generation module 505 for generating a target image according to source image information and the dimensional parameters of the multiple frames, the target image comprising the multiple successively nested frames and an image embedded among the frames.
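As an illustration only, the three modules could be organized roughly as follows in software; the class and method names, the preset values and the parameter-passing style are assumptions rather than the patent's implementation, and generate_target delegates to the render_nested_frames sketch shown earlier:

class ImageProcessingApparatus:
    # Rough software sketch of the apparatus 50: input module 501,
    # computing module 503 and image generation module 505.

    PRESETS = {"a": 3000.0, "b": 500.0, "n": 4}   # assumed system preset values

    def receive_parameters(self, user_params):
        # Input module: take whichever of a, b, n the user supplies and
        # fall back to the presets for the rest.
        return {**self.PRESETS, **user_params}

    def compute_dimensions(self, params, m, w):
        # Computing module: frame thicknesses decreasing from outside to inside.
        a, b, n = params["a"], params["b"], int(params["n"])
        hw = [a * b * m / ((a + b * i) * (a + b * (i - 1))) for i in range(1, n + 1)]
        hm = [a * b * w / ((a + b * i) * (a + b * (i - 1))) for i in range(1, n + 1)]
        return hw, hm

    def generate_target(self, source, params):
        # Image generation module: compose the nested frames and embedded image.
        return render_nested_frames(source, params["a"], params["b"], int(params["n"]))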
In the above image processing apparatus 50, the parameter value input by the user can be used to adjust the thickness of the frames, the number of frames, and so on. The successively nested rectangular frames visually give the target image a 3D effect for the user, and because the frame parameters are set by the user, an interactive function is realized and the needs of different users can be met.
According to the principle that objects observed by the human eye appear larger when near and smaller when far, in this embodiment the thicknesses of the successively nested frames decrease from the outside to the inside.
In this embodiment, the frames calculated by the computing module 503 are rectangular, and their thicknesses satisfy the following relations:
hw_i = a*b*m / [(a + b*i)*(a + b*(i-1))], (i = 1, 2, 3, ..., n);
hm_i = a*b*w / [(a + b*i)*(a + b*(i-1))], (i = 1, 2, 3, ..., n);
where hw_i is the thickness of the wide side of the i-th frame from the outside to the inside, hm_i is the thickness of the long side of the i-th frame from the outside to the inside, m is half of the length of the target image, w is half of the width of the target image, and n is the number of frames. The parameter value input by the user is at least one of a, b and n; for the remaining values, the system presets are used when the user does not input them.
After the frames are set, the source image is mapped into the frames; the image generation module 505 scales the source image down, and because the reduction ratios of the length and the width are identical, the scaling ratio is:
(2*m - 2*(hw_1 + hw_2 + ... + hw_n)) / (2*m), or equivalently (2*w - 2*(hm_1 + hm_2 + ... + hm_n)) / (2*w).
That is, the long side of the image embedded among the frames equals the long side of the source image minus the sum of the thicknesses of all frame borders along the long-side direction.
In this way, what the user sees is a more realistic stereoscopic image that conforms to 3D vision.
The embodiments described above express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of the invention, and all of them fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be determined by the appended claims.

Claims (6)

1. An image processing method, characterized by comprising the following steps:
receiving a parameter value input by a user;
calculating dimensional parameters of multiple frames according to the parameter value;
generating a target image according to source image information and the dimensional parameters of the multiple frames, the target image comprising the multiple successively nested frames and an image embedded among the frames; wherein the thicknesses of the multiple frames decrease successively from the outside to the inside, the multiple frames are rectangular, and the thicknesses of their wide sides and long sides, from the outside to the inside, respectively satisfy:
hw_i = a*b*m / [(a + b*i)*(a + b*(i-1))], (i = 1, 2, 3, ..., n);
hm_i = a*b*w / [(a + b*i)*(a + b*(i-1))], (i = 1, 2, 3, ..., n);
wherein hw_i is the thickness of the wide side of the i-th frame from the outside to the inside, hm_i is the thickness of the long side of the i-th frame from the outside to the inside, the parameter value input by the user is at least one of a, b and n, and the remaining values are preset values.
2. The image processing method according to claim 1, characterized in that m is half of the length of the target image and w is half of the width of the target image.
3. The image processing method according to claim 2, characterized in that, in the step of generating the target image according to the source image information and the dimensional parameters of the multiple rectangular frames, where the target image comprises the multiple successively nested rectangular frames and the image embedded among the frames, the image embedded among the frames is obtained by mapping the source image information, and the mapping scales the source image down by the following ratio:
(2*m - 2*(hw_1 + hw_2 + ... + hw_n)) / (2*m), or equivalently (2*w - 2*(hm_1 + hm_2 + ... + hm_n)) / (2*w).
4. An image processing apparatus, characterized by comprising:
an input module for receiving a parameter value input by a user;
a computing module for calculating dimensional parameters of multiple frames according to the parameter value;
an image generation module for generating a target image according to source image information and the dimensional parameters of the multiple frames, the target image comprising the multiple successively nested frames and an image embedded among the frames; wherein the thicknesses of the multiple frames calculated by the computing module decrease successively from the outside to the inside, the multiple frames calculated by the computing module are rectangular, and the thicknesses of their wide sides and long sides, from the outside to the inside, respectively satisfy:
hw_i = a*b*m / [(a + b*i)*(a + b*(i-1))], (i = 1, 2, 3, ..., n);
hm_i = a*b*w / [(a + b*i)*(a + b*(i-1))], (i = 1, 2, 3, ..., n);
wherein hw_i is the thickness of the wide side of the i-th frame from the outside to the inside, hm_i is the thickness of the long side of the i-th frame from the outside to the inside, the parameter value input by the user is at least one of a, b and n, and the remaining values are preset values.
5. The image processing apparatus according to claim 4, characterized in that m is half of the length of the target image and w is half of the width of the target image.
6. The image processing apparatus according to claim 5, characterized in that the image embedded among the frames generated by the image generation module is obtained by mapping the source image information, and the mapping scales the source image down by the following ratio:
(2*m - 2*(hw_1 + hw_2 + ... + hw_n)) / (2*m), or equivalently (2*w - 2*(hm_1 + hm_2 + ... + hm_n)) / (2*w).
CN201310395439.4A 2013-09-02 2013-09-02 Image processing method and device Active CN103458307B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310395439.4A CN103458307B (en) 2013-09-02 2013-09-02 Image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310395439.4A CN103458307B (en) 2013-09-02 2013-09-02 Image processing method and device

Publications (2)

Publication Number Publication Date
CN103458307A CN103458307A (en) 2013-12-18
CN103458307B true CN103458307B (en) 2017-12-12

Family

ID=49740194

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310395439.4A Active CN103458307B (en) 2013-09-02 2013-09-02 Image processing method and device

Country Status (1)

Country Link
CN (1) CN103458307B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106303491A (en) * 2015-05-27 2017-01-04 深圳超多维光电子有限公司 Image processing method and device
CN106303492A (en) * 2015-05-27 2017-01-04 深圳超多维光电子有限公司 Method for processing video frequency and device
CN106546233A (en) * 2016-10-31 2017-03-29 西北工业大学 A kind of monocular visual positioning method towards cooperative target

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005031644A1 (en) * 2003-09-29 2005-04-07 Vixs Systems Inc. Method and system for scaling images
CN1710943A (en) * 2005-06-22 2005-12-21 四川长虹电器股份有限公司 TV picture processing method
CN201118783Y (en) * 2007-11-09 2008-09-17 十速科技股份有限公司 Display controller with user-defined frame image
CN101512595A (en) * 2006-09-08 2009-08-19 艾利森电话股份有限公司 Image scaling method
CN102124733A (en) * 2008-08-26 2011-07-13 夏普株式会社 Television receiver and method for driving television receiver

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005031644A1 (en) * 2003-09-29 2005-04-07 Vixs Systems Inc. Method and system for scaling images
CN1710943A (en) * 2005-06-22 2005-12-21 四川长虹电器股份有限公司 TV picture processing method
CN101512595A (en) * 2006-09-08 2009-08-19 艾利森电话股份有限公司 Image scaling method
CN201118783Y (en) * 2007-11-09 2008-09-17 十速科技股份有限公司 Display controller with user-defined frame image
CN102124733A (en) * 2008-08-26 2011-07-13 夏普株式会社 Television receiver and method for driving television receiver

Also Published As

Publication number Publication date
CN103458307A (en) 2013-12-18

Similar Documents

Publication Publication Date Title
CN103475823A (en) Display device and image processing method for same
CN105898342A (en) Video multipoint co-screen play method and system
CN103546715A (en) Method and device for adjusting proportion of picture of smart television
CN102510508B (en) Detection-type stereo picture adjusting device and method
US20130141550A1 (en) Method, apparatus and computer program for selecting a stereoscopic imaging viewpoint pair
CN104202646A (en) Television picture display method and device, and television
CN103002349A (en) Adaptive adjustment method and device for video playing
CN103063193A (en) Method and device for ranging by camera and television
CN103458307B (en) Image processing method and device
CN103039082A (en) Image processing method and image display device according to the method
CN104159099B (en) The method to set up of binocular stereo camera during a kind of 3D three-dimensional film makes
CN101662695B (en) Method and device for acquiring virtual viewport
KR101408719B1 (en) An apparatus for converting scales in three-dimensional images and the method thereof
CN102547314A (en) Method and device for real-time three-dimensional conversion of two-dimensional digital images
WO2022042111A1 (en) Video image display method and apparatus, and multimedia device and storage medium
CN114449303A (en) Live broadcast picture generation method and device, storage medium and electronic device
CN204559792U (en) Device and the display device of stereo-picture or video is shown for display screen
CN108197373A (en) D CAD figure based on CoFrac turns three-dimensional VR design systems
CN102521876A (en) Method and system for realizing three dimensional (3D) stereoscopic effect of user interface
CN102780900B (en) Image display method of multi-person multi-view stereoscopic display
CN103391446A (en) Depth image optimizing method based on natural scene statistics
CN106028018B (en) Real scene shooting double vision point 3D method for optimizing video and system towards naked eye 3D display
WO2013024847A1 (en) Stereoscopic image generating device, stereoscopic image display device, stereoscopic image generating method, and program
CN103501433B (en) A kind of 3D painting and calligraphy display packing and device
US11388378B2 (en) Image processing apparatus, image processing method and projection system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20190329

Address after: 518000 9th Floor, D4 Building, International E City, 1001 Zhongshan Garden Road, Xili Street, Nanshan District, Shenzhen City, Guangdong Province

Patentee after: Shenzhen TCL New Technology Co., Ltd.

Address before: 516006 No. 78 Huifeng 4th Road, Zhongkai High-tech Development Zone, Huizhou City, Guangdong Province

Patentee before: TCL Optoelectronic Technology (Huizhou) Co., Ltd.

TR01 Transfer of patent right