CN107248137A - Method and mobile terminal for implementing image processing - Google Patents

Method and mobile terminal for implementing image processing

Info

Publication number
CN107248137A
CN107248137A
Authority
CN
China
Prior art keywords
image
pixel
stitching
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710286268.XA
Other languages
Chinese (zh)
Other versions
CN107248137B
Inventor
戴向东 (Dai Xiangdong)
王猛 (Wang Meng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd
Priority to CN201710286268.XA
Publication of CN107248137A
Application granted
Publication of CN107248137B
Legal status: Active


Classifications

    • G06T 3/14
    • G06T 5/94
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/50: Depth or shape recovery
    • G06T 7/55: Depth or shape recovery from multiple images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; Photographic image

Abstract

Embodiments of the invention provide a method and mobile terminal for implementing image processing, including: calculating the depth value of each pixel in a first-view image and a second-view image captured by a binocular camera; determining, from the calculated depth values, the three stitching regions that the first-view image and the second-view image form when stitched; and adjusting the brightness of the stitching regions according to their brightness information and distance parameters, then generating the stitched image from the adjusted brightness values. The three stitching regions comprise the first stitched-image region, the middle stitched-image region, and the second stitched-image region of the stitched image; the second-view image is the image captured by the camera of the binocular pair that is closer to the flash. The embodiments avoid light-dark variation during image synthesis, improve the display quality of the stitched image, and improve the user experience.

Description

Method and mobile terminal for implementing image processing
Technical field
The present invention relates to multimedia technology, and in particular to a method and mobile terminal for implementing image processing.
Background
Shooting with the binocular camera of a mobile terminal widens the viewing angle, but the terminal's flash is placed to one side of the binocular camera. In close-range shooting, the two cameras sit at different distances from the flash, so the fill light captured by the camera far from the flash differs from that captured by the camera near it, producing an obvious light-dark contrast. When the images are then synthesized, visible brightness discontinuities appear, and the image content at the seam cannot be resolved clearly.
Summary of the invention
In view of the above technical problem, embodiments of the invention provide a method and mobile terminal for implementing image processing that can improve image display quality and the user experience.
An embodiment of the invention provides a method for implementing image processing, including:
calculating the depth value of each pixel in the first-view image and the second-view image captured by a binocular camera;
determining, from the calculated depth values, the three stitching regions that the first-view image and the second-view image form when stitched;
adjusting the brightness of the stitching regions according to their brightness information and distance parameters, and generating the stitched image from the adjusted brightness values;
wherein the three stitching regions comprise the first stitched-image region, the middle stitched-image region, and the second stitched-image region of the stitched image, and the second-view image is the image captured by the camera of the binocular pair that is closer to the flash.
Optionally, the first-view image is a left-view image and the second-view image is a right-view image, and calculating the depth value of each pixel in the first-view and second-view images captured by the binocular camera includes:
for each pixel in the first-view image, searching the second-view image by image-matching techniques for the point that matches the pixel, and calculating the pixel's depth value by triangulation.
Optionally, the first-view image is a left-view image, the second-view image is a right-view image, the first stitched-image region is the left stitching region, and the second stitched-image region is the right stitching region; determining, from the calculated depth values, the three stitching regions that the first-view and second-view images form when stitched includes:
mapping the pixels of the right-hand area of the left-view image to the left boundary of the right stitching region, and mapping the pixels of the left-hand area of the right-view image to the right boundary of the left stitching region;
wherein, for a pixel Pl(x, y) of the right-hand area of the left-view image with depth value Dl(x, y), the mapped coordinate on the left boundary of the right stitching region is x1 = x - Dl(x, y), y1 = y; and for a pixel Pr(x, y) of the left-hand area of the right-view image with depth value Dr(x, y), the mapped coordinate on the right boundary of the left stitching region is x2 = x + Dr(x, y), y2 = y.
Optionally, the brightness information includes the mean pixel brightness M1 of the first stitched-image region, the mean pixel brightness M2 of the middle stitched-image region, and the mean pixel brightness M3 of the second stitched-image region; the distance parameters include the center point P1 of the first stitched-image region, the center point P2 of the middle stitched-image region, the center point P3 of the second stitched-image region, the distance D12 between P1 and P2, and the distance D23 between P2 and P3.
Optionally, adjusting the brightness of the stitching regions according to their brightness information and distance parameters includes:
for each pixel PP1(x, y) in the first stitched-image region, with distance D1(x, y) from the pixel to the center point P2 of the middle stitched-image region, adjusting its brightness as:
for each pixel PP2(x, y) in the second stitched-image region, with distance D2(x, y) from the pixel to the center point P2 of the middle stitched-image region, adjusting its brightness as:
In another aspect, an embodiment of the invention provides a mobile terminal, including:
a first camera configured to capture a first-view image;
a second camera configured to capture a second-view image;
a memory storing an image-processing program;
a processor configured to execute the image-processing program to perform the following operations:
calculating the depth value of each pixel in the first-view image and the second-view image captured by the binocular camera;
determining, from the calculated depth values, the three stitching regions that the first-view image and the second-view image form when stitched;
adjusting the brightness of the stitching regions according to their brightness information and distance parameters, and generating the stitched image from the adjusted brightness values;
wherein the three stitching regions comprise the first stitched-image region, the middle stitched-image region, and the second stitched-image region of the stitched image, and the second-view image is the image captured by the camera of the binocular pair that is closer to the flash.
Optionally, the first-view image is a left-view image and the second-view image is a right-view image, and the processor is configured to execute the image-processing program such that calculating the depth value of each pixel in the two images captured by the binocular camera includes:
for each pixel in the first-view image, searching the second-view image by image-matching techniques for the point that matches the pixel, and calculating the pixel's depth value by triangulation.
Optionally, the first-view image is a left-view image, the second-view image is a right-view image, the first stitched-image region is the left stitching region, and the second stitched-image region is the right stitching region; the processor is configured to execute the image-processing program such that determining, from the calculated depth values, the three stitching regions that the two images form when stitched includes:
mapping the pixels of the right-hand area of the left-view image to the left boundary of the right stitching region, and mapping the pixels of the left-hand area of the right-view image to the right boundary of the left stitching region;
wherein, for a pixel Pl(x, y) of the right-hand area of the left-view image with depth value Dl(x, y), the mapped coordinate on the left boundary of the right stitching region is x1 = x - Dl(x, y), y1 = y; and for a pixel Pr(x, y) of the left-hand area of the right-view image with depth value Dr(x, y), the mapped coordinate on the right boundary of the left stitching region is x2 = x + Dr(x, y), y2 = y.
Optionally, the brightness information includes the mean pixel brightness M1 of the first stitched-image region, the mean pixel brightness M2 of the middle stitched-image region, and the mean pixel brightness M3 of the second stitched-image region; the distance parameters include the center points P1, P2, and P3 of the three stitching regions, the distance D12 between P1 and P2, and the distance D23 between P2 and P3. The processor is configured to execute the image-processing program such that adjusting the brightness of the stitching regions according to their brightness information and distance parameters includes:
for each pixel PP1(x, y) in the first stitched-image region, with distance D1(x, y) from the pixel to the center point P2 of the middle stitched-image region, adjusting its brightness as:
for each pixel PP2(x, y) in the second stitched-image region, with distance D2(x, y) from the pixel to the center point P2 of the middle stitched-image region, adjusting its brightness as:
In yet another aspect, an embodiment of the invention provides a computer-readable storage medium storing one or more programs executable by one or more processors to implement the steps of the image-processing method described above.
Compared with the related art, the technical scheme of the embodiments includes: calculating the depth value of each pixel in the first-view and second-view images captured by a binocular camera; determining, from the calculated depth values, the three stitching regions that the two images form when stitched; and adjusting the brightness of the stitching regions according to their brightness information and distance parameters, then generating the stitched image from the adjusted brightness values. The three stitching regions comprise the first, middle, and second stitched-image regions of the stitched image; the second-view image is the image captured by the camera closer to the flash. The embodiments avoid light-dark variation during image synthesis, improve the display quality of the stitched image, and improve the user experience.
Brief description of the drawings
The accompanying drawings provide a further understanding of the invention and form part of this application; the schematic embodiments and their description explain the invention and do not unduly limit it. In the drawings:
Fig. 1 is a hardware structure diagram of a mobile terminal implementing the embodiments of the invention;
Fig. 2 is a flowchart of a method of image display according to an embodiment of the invention;
Fig. 3 is a schematic diagram of the positional relationship between the binocular camera and the flash;
Fig. 4a is a schematic diagram of the left-view image;
Fig. 4b is a schematic diagram of the right-view image;
Fig. 4c is a schematic diagram of the pixel depth values;
Fig. 5 is a schematic diagram of the triangulation technique;
Fig. 6 is a schematic diagram of the composition of the stitching regions;
Fig. 7 is a flowchart of a method of image display according to another embodiment of the invention;
Fig. 8 is a structural block diagram of a mobile terminal according to an embodiment of the invention.
Detailed description of the embodiments
It should be understood that the specific embodiments described here merely illustrate the invention and are not intended to limit it.
In the following description, suffixes such as "module", "part", or "unit" denote elements only to aid the explanation of the invention and have no specific meaning of their own; "module", "part", and "unit" may therefore be used interchangeably.
A terminal may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, tablet computers, notebook computers, palmtop computers, personal digital assistants (PDA), portable media players (PMP), navigation devices, wearable devices, smart bracelets, and pedometers, as well as fixed terminals such as digital TVs and desktop computers.
The following description takes a mobile terminal as an example; those skilled in the art will appreciate that, apart from elements used specifically for mobile purposes, the constructions according to the embodiments of the invention also apply to fixed terminals.
Referring to Fig. 1, a hardware structure diagram of a mobile terminal implementing the embodiments of the invention, the mobile terminal 100 may include: an RF (radio frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will understand that the structure shown in Fig. 1 does not limit the mobile terminal, which may include more or fewer parts, combine certain parts, or arrange the parts differently.
The parts of the mobile terminal are introduced below with reference to Fig. 1:
The radio frequency unit 101 may be used to receive and send signals during messaging or a call; specifically, it passes downlink information received from a base station to the processor 110 and sends uplink data to the base station. The radio frequency unit 101 typically includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, and a duplexer. It may also communicate wirelessly with networks and other devices, using any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution), and TDD-LTE (Time Division Duplexing-Long Term Evolution).
WiFi is a short-range wireless transmission technology. Through the WiFi module 102 the mobile terminal can help the user send and receive e-mail, browse web pages, and access streaming video, providing wireless broadband Internet access. Although Fig. 1 shows the WiFi module 102, it is not an essential part of the mobile terminal and may be omitted as needed without changing the essence of the invention.
When the mobile terminal 100 is in a mode such as call-signal reception, call, recording, speech recognition, or broadcast reception, the audio output unit 103 can convert audio data received by the radio frequency unit 101 or the WiFi module 102, or stored in the memory 109, into an audio signal and output it as sound. The audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (for example, a call-signal reception sound or a message reception sound) and may include a loudspeaker, a buzzer, and the like.
The A/V input unit 104 receives audio or video signals and may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of static pictures or video obtained by an image-capture device (such as a camera) in video-capture or image-capture mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sound (audio data) in operating modes such as call, recording, or speech recognition and process it into audio data; in call mode, the processed audio (voice) data can be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101. The microphone 1042 may implement various noise-cancellation (or suppression) algorithms to remove noise or interference arising while receiving and sending audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensors include an ambient-light sensor and a proximity sensor: the ambient-light sensor can adjust the brightness of the display panel 1061 according to the ambient light, and the proximity sensor can switch off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As a kind of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally three axes) and, at rest, the magnitude and direction of gravity; it can be used for applications that recognize the phone's posture (such as portrait/landscape switching, related games, and magnetometer calibration) and for vibration-recognition functions (such as a pedometer or tap detection). The phone may also be configured with a fingerprint sensor, pressure sensor, iris sensor, molecular sensor, gyroscope, barometer, hygrometer, thermometer, infrared sensor, and other sensors, which are not repeated here.
The display unit 106 displays information input by the user or supplied to the user. It may include a display panel 1061, which may be configured as a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 107 may receive input numeric or character information and generate key-signal input related to user settings and function control of the mobile terminal. Specifically, it may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also called a touch screen, collects the user's touch operations on or near it (such as operations with a finger, stylus, or any other suitable object or accessory on or near the touch panel 1071) and drives the corresponding connection devices according to a preset program. The touch panel 1071 may include a touch-detection device and a touch controller: the touch-detection device detects the user's touch position and the signal brought by the touch operation and passes the signal to the touch controller, which converts the touch information into contact coordinates, sends them to the processor 110, and receives and executes commands from the processor 110. The touch panel 1071 may be implemented as resistive, capacitive, infrared, surface-acoustic-wave, or other types. Besides the touch panel 1071, the user input unit 107 may include other input devices 1072, including but not limited to one or more of a physical keyboard, function keys (such as volume keys or a power switch), a trackball, a mouse, and a joystick, which are not limited here.
Further, the touch panel 1071 may cover the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it passes the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides the corresponding visual output on the display panel 1061 according to that type. Although in Fig. 1 the touch panel 1071 and the display panel 1061 realize the input and output functions of the mobile terminal as two independent parts, in some embodiments they may be integrated to realize both functions; this is not limited here.
The interface unit 108 serves as an interface through which at least one external device can connect to the mobile terminal 100. For example, external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory-card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, and an earphone port. The interface unit 108 may receive input (for example, data or power) from an external device and transfer it to one or more elements within the mobile terminal 100, or transmit data between the mobile terminal 100 and an external device.
The memory 109 may store software programs and various data. It may mainly include a program storage area and a data storage area: the program storage area may store the operating system and the applications needed for at least one function (such as sound playback or image playback), while the data storage area may store data created during use of the phone (such as audio data or a phone book). In addition, the memory 109 may include high-speed random-access memory as well as non-volatile memory, such as at least one magnetic-disk storage device, flash device, or other solid-state storage part.
The processor 110 is the control center of the mobile terminal. It connects every part of the whole terminal through various interfaces and lines and, by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, performs the terminal's functions and processes data, thereby monitoring the terminal as a whole. The processor 110 may include one or more processing units; preferably, it may integrate an application processor, which mainly handles the operating system, user interface, and applications, and a modem processor, which mainly handles wireless communication. The modem processor may also be left out of the processor 110.
The mobile terminal 100 may also include a power supply 111 (such as a battery) that powers all parts; preferably, the power supply 111 may be logically connected to the processor 110 through a power-management system, which realizes functions such as charge management, discharge management, and power-consumption management.
Although not shown in Fig. 1, the mobile terminal 100 may also include a Bluetooth module and the like, which are not repeated here.
Based on the above mobile-terminal hardware structure, the embodiments of the method of the invention are proposed.
Fig. 2 is a flowchart of a method of image display according to an embodiment of the invention; as shown in Fig. 2, it includes:
Step 200: calculating the depth value of each pixel in the first-view image and the second-view image captured by the binocular camera;
Step 201: determining, from the calculated depth values, the three stitching regions that the first-view image and the second-view image form when stitched;
Step 202: adjusting the brightness of the stitching regions according to their brightness information and distance parameters, and generating the stitched image from the adjusted brightness values;
wherein the three stitching regions comprise the first, middle, and second stitched-image regions of the stitched image, and the second-view image is the image captured by the camera of the binocular pair that is closer to the flash. Fig. 3 is a schematic diagram of the positional relationship between the binocular camera and the flash: the first camera captures the first-view image, the second camera captures the second-view image, and the second camera is the one close to the flash.
Optionally, the first-view image is a left-view image and the second-view image is a right-view image, and calculating the depth value of each pixel in the two images captured by the binocular camera includes:
for each pixel in the first-view image, searching the second-view image by image-matching techniques for the point that matches the pixel, and calculating the pixel's depth value by triangulation.
Figs. 4a-4c are schematic diagrams of the matching process: Fig. 4a shows the left-view image and Fig. 4b the right-view image. Pixel 1 in Fig. 4a is matched by image-matching techniques to pixel 2 in Fig. 4b; a match can be confirmed by comparing the color and brightness similarity of the pixels. Once the matching point is found, the pixel depth values of Fig. 4c can be calculated by triangulation. Fig. 5 is a schematic diagram of the triangulation: Cleft is the optical center of the left camera and Cright that of the right camera; Oleft is the center of the left-view image and Oright the center of the right-view image; P is a point in physical space; Pleft is the imaging point of P in the left camera's image and Pright its imaging point in the right camera's image; f is the focal length of the lens, Z is the distance from P to the cameras, and T is the distance between the two cameras. From the triangle relations: Depth = Pleft - Pright and Z = f * T / Depth.
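The relation Z = f * T / Depth above, where "Depth" denotes the disparity Pleft - Pright, can be sketched as follows; the numeric values in the usage line are hypothetical, not from the patent:

```python
def depth_from_disparity(x_left, x_right, f, T):
    """Distance Z of a scene point in a rectified stereo pair.

    x_left, x_right -- horizontal image coordinates of the matched points
                       (Pleft and Pright in the text)
    f               -- focal length in pixels
    T               -- baseline, i.e. distance between the two cameras
    Implements Z = f * T / (Pleft - Pright)."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return f * T / disparity

# Hypothetical numbers: focal length 700 px, baseline 60 mm, disparity 35 px
z = depth_from_disparity(400, 365, f=700, T=60)  # 1200.0, i.e. 1.2 m in mm
```

Note that depth is inversely proportional to disparity, which is why near objects (large disparity) need the larger coordinate shifts described in the mapping step below.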
Optionally, in this embodiment of the present invention, the first view image is the left-view image, the second view image is the right-view image, the first stitching image region is the left stitching image region, and the second stitching image region is the right stitching image region. Determining, according to the computed depth values, the three stitching regions composed of the first view image and the second view image during stitching includes:
mapping the pixels of the right-side region of the left-view image to the left boundary of the right stitching image region; and mapping the pixels of the left-side region of the right-view image to the right boundary of the left stitching image region;
wherein a pixel of the right-side region of the left-view image is Pl(x, y); when its depth value is Dl(x, y), it is mapped to the left boundary of the right stitching image region at pixel coordinates x1 = x - Dl(x, y), y1 = y. A pixel of the left-side region of the right-view image is Pr(x, y); when its depth value is Dr(x, y), it is mapped to the right boundary of the left stitching image region at pixel coordinates x2 = x + Dr(x, y), y2 = y. Fig. 6 is a schematic diagram of the composition of the stitching regions according to an embodiment of the present invention; as shown in Fig. 6, the pixels of the right-side region of the left-view image are mapped to the left boundary of the right stitching image region, and the pixels of the left-side region of the right-view image are mapped to the right boundary of the left stitching image region.
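The mapping rule above can be written directly. This is a minimal sketch; the function names are illustrative, not from the patent, and the depth value is assumed to already be expressed in pixels.

```python
# Boundary mapping from the text above: a pixel Pl(x, y) of the right-side
# region of the left-view image lands on the left boundary of the right
# stitching region at (x - Dl(x, y), y); a pixel Pr(x, y) of the left-side
# region of the right-view image lands on the right boundary of the left
# stitching region at (x + Dr(x, y), y).

def map_left_to_right_region(x: int, y: int, depth: int) -> tuple:
    """Map Pl(x, y) with depth value Dl(x, y) = depth: x1 = x - Dl, y1 = y."""
    return (x - depth, y)

def map_right_to_left_region(x: int, y: int, depth: int) -> tuple:
    """Map Pr(x, y) with depth value Dr(x, y) = depth: x2 = x + Dr, y2 = y."""
    return (x + depth, y)

print(map_left_to_right_region(100, 50, 12))  # (88, 50)
print(map_right_to_left_region(100, 50, 12))  # (112, 50)
```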
Optionally, in this embodiment of the present invention, the luminance information includes the pixel luminance mean M1 of the first stitching image region, the pixel luminance mean M2 of the middle stitching image region, and the pixel luminance mean M3 of the second stitching image region; the distance parameters include the center point P1 of the first stitching image region of the three stitching regions, the center point P2 of the middle stitching image region, the center point P3 of the second stitching image region, the spatial distance D12 between center points P1 and P2, and the spatial distance D23 between center points P2 and P3.
Optionally, in this embodiment of the present invention, performing luminance adjustment on the stitching regions according to the luminance information and distance parameters of the stitching regions includes:
for each pixel P_P1(x, y) in the first stitching image region, where the distance from P_P1(x, y) to the center point P2 of the middle stitching image region is D1(x, y), adjusting its luminance as: P_TP1(x, y) = P_P1(x, y) * (1 + (M1/M2) * (D1(x, y)/D12));
for each pixel P_P2(x, y) in the second stitching image region, where the distance from P_P2(x, y) to the center point P2 of the middle stitching image region is D2(x, y), adjusting its luminance as: P_TP2(x, y) = P_P2(x, y) * (1 - (M2/M3) * (D2(x, y)/D23)).
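A minimal sketch of the two adjustment formulas (they are stated in full in claims 5 and 9): all symbol names (M1, M2, M3, D1, D2, D12, D23) follow the patent, while the function names and numeric values are illustrative only.

```python
# Luminance adjustment from the text above: the correction applied to a pixel
# is scaled both by the ratio of region luminance means and by the pixel's
# distance to the middle region's center point P2, normalized by the
# center-to-center distance (D12 or D23).

def adjust_first_region(p: float, M1: float, M2: float,
                        D1: float, D12: float) -> float:
    """P_TP1(x, y) = P_P1(x, y) * (1 + (M1 / M2) * (D1(x, y) / D12))."""
    return p * (1 + (M1 / M2) * (D1 / D12))

def adjust_second_region(p: float, M2: float, M3: float,
                         D2: float, D23: float) -> float:
    """P_TP2(x, y) = P_P2(x, y) * (1 - (M2 / M3) * (D2(x, y) / D23))."""
    return p * (1 - (M2 / M3) * (D2 / D23))

# Illustrative values only.
print(adjust_first_region(p=100.0, M1=120.0, M2=150.0, D1=30.0, D12=60.0))   # 140.0
print(adjust_second_region(p=100.0, M2=150.0, M3=180.0, D2=30.0, D23=60.0))  # ~58.33
```

The design intent, per the surrounding text, is that the correction grows smoothly with distance from the middle region's center, so the luminance transition across the seam has no abrupt light-dark step.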
Compared with the related art, the technical solution of the embodiments of the present invention includes: calculating the depth value of each pixel in the first view image and the second view image obtained by the binocular camera; determining, according to the computed depth values, the three stitching regions composed of the first view image and the second view image during stitching; and performing luminance adjustment on the stitching regions according to the luminance information and distance parameters of the stitching regions, and generating the stitched image according to the adjusted luminance values. The three stitching regions include the first stitching image region, the middle stitching image region, and the second stitching image region of the stitched image; the second view image is the image captured by the camera of the binocular camera that is closer to the flash. The embodiments of the present invention avoid light-dark variation during image synthesis, improve the display quality of the stitched image, and improve the user experience.
Fig. 7 is a flowchart of a method for realizing image display according to another embodiment of the present invention. As shown in Fig. 7, the method includes:
Step 700: obtaining a first view image and a second view image through a binocular camera;
Step 701: calculating the depth value of each pixel in the first view image and the second view image obtained by the binocular camera;
Step 702: determining, according to the computed depth values, the three stitching regions composed of the first view image and the second view image during stitching;
Step 703: performing luminance adjustment on the stitching regions according to the luminance information and distance parameters of the stitching regions, and generating the stitched image according to the adjusted luminance values;
wherein the three stitching regions include the first stitching image region, the middle stitching image region, and the second stitching image region of the stitched image; the second view image is the image captured by the camera of the binocular camera that is closer to the flash.
Optionally, the first view image is the left-view image and the second view image is the right-view image; calculating the depth value of each pixel in the first view image and the second view image obtained by the binocular camera includes:
for each pixel in the first view image, searching the second view image by an image matching technique for a match point matching the pixel, and calculating the depth value of the pixel by triangulation.
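As an illustrative sketch of this step (the patent does not fix a particular matching technique; the sum-of-absolute-differences cost and the window size below are assumptions): for a pixel in the left-view image, search along the same row of the right-view image for the position whose neighborhood is most similar, then convert the resulting disparity to depth via Z = f * T / disparity.

```python
# Hypothetical scanline block matcher: compares small 1-D windows by sum of
# absolute intensity differences and returns the best-matching column.

def match_along_row(left_row, right_row, x, win=1):
    """Return the x' in right_row whose neighborhood best matches left_row at x."""
    best_x, best_cost = 0, float("inf")
    for cx in range(win, len(right_row) - win):
        cost = sum(abs(left_row[x + k] - right_row[cx + k])
                   for k in range(-win, win + 1))
        if cost < best_cost:
            best_x, best_cost = cx, cost
    return best_x

left_row  = [10, 10, 80, 90, 80, 10, 10, 10]
right_row = [10, 80, 90, 80, 10, 10, 10, 10]
x = 3                      # intensity peak in the left row
xr = match_along_row(left_row, right_row, x)
print(x - xr)              # disparity = 1
```

A real implementation would also check the color similarity mentioned in the description and reject ambiguous matches; this sketch only shows the search structure.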
Optionally, in this embodiment of the present invention, the first view image is the left-view image, the second view image is the right-view image, the first stitching image region is the left stitching image region, and the second stitching image region is the right stitching image region. Determining, according to the computed depth values, the three stitching regions composed of the first view image and the second view image during stitching includes:
mapping the pixels of the right-side region of the left-view image to the left boundary of the right stitching image region; and mapping the pixels of the left-side region of the right-view image to the right boundary of the left stitching image region;
wherein a pixel of the right-side region of the left-view image is Pl(x, y); when its depth value is Dl(x, y), it is mapped to the left boundary of the right stitching image region at pixel coordinates x1 = x - Dl(x, y), y1 = y. A pixel of the left-side region of the right-view image is Pr(x, y); when its depth value is Dr(x, y), it is mapped to the right boundary of the left stitching image region at pixel coordinates x2 = x + Dr(x, y), y2 = y.
Optionally, in this embodiment of the present invention, the luminance information includes the pixel luminance mean M1 of the first stitching image region, the pixel luminance mean M2 of the middle stitching image region, and the pixel luminance mean M3 of the second stitching image region; the distance parameters include the center point P1 of the first stitching image region of the three stitching regions, the center point P2 of the middle stitching image region, the center point P3 of the second stitching image region, the spatial distance D12 between center points P1 and P2, and the spatial distance D23 between center points P2 and P3.
Optionally, in this embodiment of the present invention, performing luminance adjustment on the stitching regions according to the luminance information and distance parameters of the stitching regions includes:
for each pixel P_P1(x, y) in the first stitching image region, where the distance from P_P1(x, y) to the center point P2 of the middle stitching image region is D1(x, y), adjusting its luminance as: P_TP1(x, y) = P_P1(x, y) * (1 + (M1/M2) * (D1(x, y)/D12));
for each pixel P_P2(x, y) in the second stitching image region, where the distance from P_P2(x, y) to the center point P2 of the middle stitching image region is D2(x, y), adjusting its luminance as: P_TP2(x, y) = P_P2(x, y) * (1 - (M2/M3) * (D2(x, y)/D23)).
Compared with the related art, the technical solution of the embodiments of the present invention includes: calculating the depth value of each pixel in the first view image and the second view image obtained by the binocular camera; determining, according to the computed depth values, the three stitching regions composed of the first view image and the second view image during stitching; and performing luminance adjustment on the stitching regions according to the luminance information and distance parameters of the stitching regions, and generating the stitched image according to the adjusted luminance values. The three stitching regions include the first stitching image region, the middle stitching image region, and the second stitching image region of the stitched image; the second view image is the image captured by the camera of the binocular camera that is closer to the flash. The embodiments of the present invention avoid light-dark variation during image synthesis, improve the display quality of the stitched image, and improve the user experience.
Fig. 8 is a structural block diagram of a mobile terminal according to an embodiment of the present invention. As shown in Fig. 8, the mobile terminal includes:
a first camera configured to capture the first view image;
a second camera configured to capture the second view image;
a memory storing an image processing program;
a processor configured to execute the image processing program to perform the following operations:
calculating the depth value of each pixel in the first view image and the second view image obtained by the binocular camera;
determining, according to the computed depth values, the three stitching regions composed of the first view image and the second view image during stitching;
performing luminance adjustment on the stitching regions according to the luminance information and distance parameters of the stitching regions, and generating the stitched image according to the adjusted luminance values;
wherein the three stitching regions include the first stitching image region, the middle stitching image region, and the second stitching image region of the stitched image; the second view image is the image captured by the camera of the binocular camera that is closer to the flash.
Optionally, in this embodiment of the present invention, the first view image is the left-view image and the second view image is the right-view image; the processor is configured to execute the image processing program such that calculating the depth value of each pixel in the first view image and the second view image obtained by the binocular camera includes:
for each pixel in the first view image, searching the second view image by an image matching technique for a match point matching the pixel, and calculating the depth value of the pixel by triangulation.
Optionally, in this embodiment of the present invention, the first view image is the left-view image, the second view image is the right-view image, the first stitching image region is the left stitching image region, and the second stitching image region is the right stitching image region; the processor is configured to execute the image processing program such that determining, according to the computed depth values, the three stitching regions composed of the first view image and the second view image during stitching includes:
mapping the pixels of the right-side region of the left-view image to the left boundary of the right stitching image region; and mapping the pixels of the left-side region of the right-view image to the right boundary of the left stitching image region;
wherein a pixel of the right-side region of the left-view image is Pl(x, y); when its depth value is Dl(x, y), it is mapped to the left boundary of the right stitching image region at pixel coordinates x1 = x - Dl(x, y), y1 = y. A pixel of the left-side region of the right-view image is Pr(x, y); when its depth value is Dr(x, y), it is mapped to the right boundary of the left stitching image region at pixel coordinates x2 = x + Dr(x, y), y2 = y.
Optionally, in this embodiment of the present invention, the luminance information includes the pixel luminance mean M1 of the first stitching image region, the pixel luminance mean M2 of the middle stitching image region, and the pixel luminance mean M3 of the second stitching image region; the distance parameters include the center point P1 of the first stitching image region of the three stitching regions, the center point P2 of the middle stitching image region, the center point P3 of the second stitching image region, the spatial distance D12 between center points P1 and P2, and the spatial distance D23 between center points P2 and P3. The processor is configured to execute the image processing program such that performing luminance adjustment on the stitching regions according to the luminance information and distance parameters of the stitching regions includes:
for each pixel P_P1(x, y) in the first stitching image region, where the distance from P_P1(x, y) to the center point P2 of the middle stitching image region is D1(x, y), adjusting its luminance as: P_TP1(x, y) = P_P1(x, y) * (1 + (M1/M2) * (D1(x, y)/D12));
for each pixel P_P2(x, y) in the second stitching image region, where the distance from P_P2(x, y) to the center point P2 of the middle stitching image region is D2(x, y), adjusting its luminance as: P_TP2(x, y) = P_P2(x, y) * (1 - (M2/M3) * (D2(x, y)/D23)).
Compared with the related art, the technical solution of the embodiments of the present invention includes: calculating the depth value of each pixel in the first view image and the second view image obtained by the binocular camera; determining, according to the computed depth values, the three stitching regions composed of the first view image and the second view image during stitching; and performing luminance adjustment on the stitching regions according to the luminance information and distance parameters of the stitching regions, and generating the stitched image according to the adjusted luminance values. The three stitching regions include the first stitching image region, the middle stitching image region, and the second stitching image region of the stitched image; the second view image is the image captured by the camera of the binocular camera that is closer to the flash. The embodiments of the present invention avoid light-dark variation during image synthesis, improve the display quality of the stitched image, and improve the user experience.
An embodiment of the present invention further provides a computer-readable storage medium storing one or more programs which, when executed by one or more processors, implement the following steps:
calculating the depth value of each pixel in the first view image and the second view image obtained by the binocular camera;
determining, according to the computed depth values, the three stitching regions composed of the first view image and the second view image during stitching;
performing luminance adjustment on the stitching regions according to the luminance information and distance parameters of the stitching regions, and generating the stitched image according to the adjusted luminance values;
wherein the three stitching regions include the first stitching image region, the middle stitching image region, and the second stitching image region of the stitched image; the second view image is the image captured by the camera of the binocular camera that is closer to the flash.
Optionally, in this embodiment of the present invention, the first view image is the left-view image and the second view image is the right-view image; the one or more programs, when executed by one or more processors, implement calculating the depth value of each pixel in the first view image and the second view image obtained by the binocular camera, including:
for each pixel in the first view image, searching the second view image by an image matching technique for a match point matching the pixel, and calculating the depth value of the pixel by triangulation.
Optionally, in this embodiment of the present invention, the first view image is the left-view image, the second view image is the right-view image, the first stitching image region is the left stitching image region, and the second stitching image region is the right stitching image region; the one or more programs, when executed by one or more processors, implement determining, according to the computed depth values, the three stitching regions composed of the first view image and the second view image during stitching, including:
mapping the pixels of the right-side region of the left-view image to the left boundary of the right stitching image region; and mapping the pixels of the left-side region of the right-view image to the right boundary of the left stitching image region;
wherein a pixel of the right-side region of the left-view image is Pl(x, y); when its depth value is Dl(x, y), it is mapped to the left boundary of the right stitching image region at pixel coordinates x1 = x - Dl(x, y), y1 = y. A pixel of the left-side region of the right-view image is Pr(x, y); when its depth value is Dr(x, y), it is mapped to the right boundary of the left stitching image region at pixel coordinates x2 = x + Dr(x, y), y2 = y.
Optionally, in this embodiment of the present invention, the luminance information includes the pixel luminance mean M1 of the first stitching image region, the pixel luminance mean M2 of the middle stitching image region, and the pixel luminance mean M3 of the second stitching image region; the distance parameters include the center point P1 of the first stitching image region of the three stitching regions, the center point P2 of the middle stitching image region, the center point P3 of the second stitching image region, the spatial distance D12 between center points P1 and P2, and the spatial distance D23 between center points P2 and P3. The one or more programs, when executed by one or more processors, implement performing luminance adjustment on the stitching regions according to the luminance information and distance parameters of the stitching regions, including:
for each pixel P_P1(x, y) in the first stitching image region, where the distance from P_P1(x, y) to the center point P2 of the middle stitching image region is D1(x, y), adjusting its luminance as: P_TP1(x, y) = P_P1(x, y) * (1 + (M1/M2) * (D1(x, y)/D12));
for each pixel P_P2(x, y) in the second stitching image region, where the distance from P_P2(x, y) to the center point P2 of the middle stitching image region is D2(x, y), adjusting its luminance as: P_TP2(x, y) = P_P2(x, y) * (1 - (M2/M3) * (D2(x, y)/D23)).
Compared with the related art, the technical solution of the embodiments of the present invention includes: calculating the depth value of each pixel in the first view image and the second view image obtained by the binocular camera; determining, according to the computed depth values, the three stitching regions composed of the first view image and the second view image during stitching; and performing luminance adjustment on the stitching regions according to the luminance information and distance parameters of the stitching regions, and generating the stitched image according to the adjusted luminance values. The three stitching regions include the first stitching image region, the middle stitching image region, and the second stitching image region of the stitched image; the second view image is the image captured by the camera of the binocular camera that is closer to the flash. The embodiments of the present invention avoid light-dark variation during image synthesis, improve the display quality of the stitched image, and improve the user experience.
It should be noted that, herein, the terms "comprise", "include", or any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. In the absence of further restrictions, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that includes the element.
The sequence numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the better implementation. Based on this understanding, the part of the technical solution of the present invention that contributes to the prior art can be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above specific embodiments, which are merely illustrative rather than restrictive. Under the enlightenment of the present invention, those of ordinary skill in the art can devise many further forms without departing from the concept of the present invention and the scope of protection of the claims, all of which fall within the protection of the present invention.

Claims (10)

1. A method for realizing image processing, characterized by comprising:
calculating a depth value of each pixel in a first view image and a second view image obtained by a binocular camera;
determining, according to the computed depth values, three stitching regions composed of the first view image and the second view image during stitching;
performing luminance adjustment on the stitching regions according to luminance information and distance parameters of the stitching regions, and generating a stitched image according to the adjusted luminance values;
wherein the three stitching regions comprise a first stitching image region, a middle stitching image region, and a second stitching image region of the stitched image; and the second view image is an image captured by a camera of the binocular camera that is closer to a flash.
2. The method according to claim 1, characterized in that the first view image is a left-view image and the second view image is a right-view image, and calculating the depth value of each pixel in the first view image and the second view image obtained by the binocular camera comprises:
for each pixel in the first view image, searching the second view image by an image matching technique for a match point matching the pixel, and calculating the depth value of the pixel by triangulation.
3. The method according to claim 1, characterized in that the first view image is a left-view image, the second view image is a right-view image, the first stitching image region is a left stitching image region, and the second stitching image region is a right stitching image region; and determining, according to the computed depth values, the three stitching regions composed of the first view image and the second view image during stitching comprises:
mapping pixels of a right-side region of the left-view image to a left boundary of the right stitching image region; and mapping pixels of a left-side region of the right-view image to a right boundary of the left stitching image region;
wherein a pixel of the right-side region of the left-view image is Pl(x, y); when its depth value is Dl(x, y), it is mapped to the left boundary of the right stitching image region at pixel coordinates x1 = x - Dl(x, y), y1 = y; and a pixel of the left-side region of the right-view image is Pr(x, y); when its depth value is Dr(x, y), it is mapped to the right boundary of the left stitching image region at pixel coordinates x2 = x + Dr(x, y), y2 = y.
4. The method according to any one of claims 1 to 3, characterized in that the luminance information comprises a pixel luminance mean M1 of the first stitching image region, a pixel luminance mean M2 of the middle stitching image region, and a pixel luminance mean M3 of the second stitching image region; and the distance parameters comprise a center point P1 of the first stitching image region of the three stitching regions, a center point P2 of the middle stitching image region, a center point P3 of the second stitching image region, a spatial distance D12 between center points P1 and P2, and a spatial distance D23 between center points P2 and P3.
5. The method according to claim 4, characterized in that performing luminance adjustment on the stitching regions according to the luminance information and distance parameters of the stitching regions comprises:
for each pixel P_P1(x, y) in the first stitching image region, where the distance from P_P1(x, y) to the center point P2 of the middle stitching image region is D1(x, y), adjusting its luminance as:
P_TP1(x, y) = P_P1(x, y) * (1 + (M1/M2) * (D1(x, y)/D12));
for each pixel P_P2(x, y) in the second stitching image region, where the distance from P_P2(x, y) to the center point P2 of the middle stitching image region is D2(x, y), adjusting its luminance as:
P_TP2(x, y) = P_P2(x, y) * (1 - (M2/M3) * (D2(x, y)/D23)).
6. A mobile terminal, characterized by comprising:
a first camera configured to capture a first view image;
a second camera configured to capture a second view image;
a memory storing an image processing program; and
a processor configured to execute the image processing program to perform the following operations:
calculating a depth value of each pixel in the first view image and the second view image obtained by the binocular camera;
determining, according to the computed depth values, three stitching regions composed of the first view image and the second view image during stitching;
performing luminance adjustment on the stitching regions according to luminance information and distance parameters of the stitching regions, and generating a stitched image according to the adjusted luminance values;
wherein the three stitching regions comprise a first stitching image region, a middle stitching image region, and a second stitching image region of the stitched image; and the second view image is an image captured by a camera of the binocular camera that is closer to a flash.
7. The mobile terminal according to claim 6, characterized in that the first view image is a left-view image and the second view image is a right-view image, and the processor is configured to execute the image processing program such that calculating the depth value of each pixel in the first view image and the second view image obtained by the binocular camera comprises:
for each pixel in the first view image, searching the second view image by an image matching technique for a match point matching the pixel, and calculating the depth value of the pixel by triangulation.
8. The mobile terminal according to claim 6, characterized in that the first view image is a left-view image, the second view image is a right-view image, the first stitching image region is a left stitching image region, and the second stitching image region is a right stitching image region; and the processor is configured to execute the image processing program such that determining, according to the computed depth values, the three stitching regions composed of the first view image and the second view image during stitching comprises:
mapping pixels of a right-side region of the left-view image to a left boundary of the right stitching image region; and mapping pixels of a left-side region of the right-view image to a right boundary of the left stitching image region;
wherein a pixel of the right-side region of the left-view image is Pl(x, y); when its depth value is Dl(x, y), it is mapped to the left boundary of the right stitching image region at pixel coordinates x1 = x - Dl(x, y), y1 = y; and a pixel of the left-side region of the right-view image is Pr(x, y); when its depth value is Dr(x, y), it is mapped to the right boundary of the left stitching image region at pixel coordinates x2 = x + Dr(x, y), y2 = y.
9. The mobile terminal according to any one of claims 6 to 8, characterized in that the luminance information comprises a pixel luminance mean M1 of the first stitching image region, a pixel luminance mean M2 of the middle stitching image region, and a pixel luminance mean M3 of the second stitching image region; the distance parameters comprise a center point P1 of the first stitching image region of the three stitching regions, a center point P2 of the middle stitching image region, a center point P3 of the second stitching image region, a spatial distance D12 between center points P1 and P2, and a spatial distance D23 between center points P2 and P3; and the processor is configured to execute the image processing program such that performing luminance adjustment on the stitching regions according to the luminance information and distance parameters of the stitching regions comprises:
for each pixel P_P1(x, y) in the first stitching image region, where the distance from P_P1(x, y) to the center point P2 of the middle stitching image region is D1(x, y), adjusting its luminance as:
P_TP1(x, y) = P_P1(x, y) * (1 + (M1/M2) * (D1(x, y)/D12));
For each pixel P_P2(x, y) in the second stitching image region, where D_2(x, y) is the distance from pixel P_P2(x, y) to the center point P2 of the middle stitching image region, the luminance adjustment is:
P_TP2(x, y) = P_P2(x, y) * (1 - (M2 / M3) * (D_2(x, y) / D23)).
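The two adjustment formulas above can be sketched as small helper functions. The helper names are assumptions, as is the use of Euclidean distance for D_1 and D_2 (the claim only says "distance to the center point P2"):

```python
import math

def adjust_first_region_pixel(p, x, y, M1, M2, p2_center, D12):
    """Claim-9 adjustment for a pixel value p at (x, y) in the first
    stitching region; p2_center is the middle region's center point P2."""
    d1 = math.hypot(x - p2_center[0], y - p2_center[1])  # D_1(x, y)
    return p * (1 + (M1 / M2) * (d1 / D12))

def adjust_second_region_pixel(p, x, y, M2, M3, p2_center, D23):
    """Claim-9 adjustment for a pixel value p at (x, y) in the second
    stitching region; note the minus sign relative to the first region."""
    d2 = math.hypot(x - p2_center[0], y - p2_center[1])  # D_2(x, y)
    return p * (1 - (M2 / M3) * (d2 / D23))
```

Both adjustments scale linearly with the pixel's distance from the middle region's center, so the correction vanishes at P2 and grows toward the outer edges of the panorama.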
10. A computer-readable storage medium, wherein the computer-readable storage medium stores one or more programs, and the one or more programs are executable by one or more processors to implement the steps of the image processing method according to any one of claims 1 to 5.
CN201710286268.XA 2017-04-27 2017-04-27 Method for realizing image processing and mobile terminal Active CN107248137B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710286268.XA CN107248137B (en) 2017-04-27 2017-04-27 Method for realizing image processing and mobile terminal

Publications (2)

Publication Number Publication Date
CN107248137A true CN107248137A (en) 2017-10-13
CN107248137B CN107248137B (en) 2021-01-15

Family

ID=60016424

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710286268.XA Active CN107248137B (en) 2017-04-27 2017-04-27 Method for realizing image processing and mobile terminal

Country Status (1)

Country Link
CN (1) CN107248137B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108076291A (en) * 2017-12-28 2018-05-25 北京安云世纪科技有限公司 Virtualization processing method, device and the mobile terminal of a kind of image data
CN108898171A (en) * 2018-06-20 2018-11-27 深圳市易成自动驾驶技术有限公司 Recognition processing method, system and computer readable storage medium
CN110377259A (en) * 2019-07-19 2019-10-25 深圳前海达闼云端智能科技有限公司 A kind of hidden method of equipment, electronic equipment and storage medium
CN110599436A (en) * 2019-09-24 2019-12-20 北京凌云天润智能科技有限公司 Binocular image splicing and fusing algorithm
CN110942023A (en) * 2019-11-25 2020-03-31 鹰驾科技(深圳)有限公司 Indication method, device and equipment for vehicle vision blind area and storage medium
CN113077387A (en) * 2021-04-14 2021-07-06 杭州海康威视数字技术股份有限公司 Image processing method and device
CN115861079A (en) * 2023-02-24 2023-03-28 和普威视光电股份有限公司 Panoramic image splicing method and system without overlapping area and splicing terminal

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102339062A (en) * 2011-07-11 2012-02-01 西北农林科技大学 Navigation and remote monitoring system for miniature agricultural machine based on DSP (Digital Signal Processor) and binocular vision
CN103369342A (en) * 2013-08-05 2013-10-23 重庆大学 Method for inpainting and restoring processing of vacancy of DIBR (Depth Image Based Rendering) target image
US20140267593A1 (en) * 2013-03-14 2014-09-18 Snu R&Db Foundation Method for processing image and electronic device thereof
CN104125445A (en) * 2013-04-25 2014-10-29 奇景光电股份有限公司 Image depth-of-field adjusting device and image depth-of-field adjusting method
CN104252706A (en) * 2013-06-27 2014-12-31 株式会社理光 Method and system for detecting specific plane
CN104767986A (en) * 2014-01-02 2015-07-08 财团法人工业技术研究院 Depth of Field (DOF) image correction method and system
CN104808795A (en) * 2015-04-29 2015-07-29 王子川 Gesture recognition method for reality-augmented eyeglasses and reality-augmented eyeglasses system
CN105096314A (en) * 2015-06-19 2015-11-25 西安电子科技大学 Binary grid template-based method for obtaining structured light dynamic scene depth
CN105321151A (en) * 2015-10-27 2016-02-10 Tcl集团股份有限公司 Panorama stitching brightness equalization method and system
CN105635602A (en) * 2015-12-31 2016-06-01 天津大学 System for mosaicing videos by adopting brightness and color cast between two videos and adjustment method thereof
CN105869119A (en) * 2016-05-06 2016-08-17 安徽伟合电子科技有限公司 Dynamic video acquisition method
CN105933695A (en) * 2016-06-29 2016-09-07 深圳市优象计算技术有限公司 Panoramic camera imaging device and method based on high-speed interconnection of multiple GPUs
CN106161980A (en) * 2016-07-29 2016-11-23 宇龙计算机通信科技(深圳)有限公司 Photographic method and system based on dual camera
CN106303283A (en) * 2016-08-15 2017-01-04 Tcl集团股份有限公司 A kind of panoramic image synthesis method based on fish-eye camera and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant