CN108184075A - Method and apparatus for generating an image - Google Patents
- Publication number
- CN108184075A CN108184075A CN201810045180.3A CN201810045180A CN108184075A CN 108184075 A CN108184075 A CN 108184075A CN 201810045180 A CN201810045180 A CN 201810045180A CN 108184075 A CN108184075 A CN 108184075A
- Authority
- CN
- China
- Prior art keywords
- image
- image block
- subregion
- block
- capture apparatus
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/73—Circuitry for compensating brightness variation in the scene by influencing the exposure time
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/10—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Processing (AREA)
Abstract
Embodiments of the present application disclose a method and apparatus for generating an image. In one specific embodiment, the method includes: exposing a target scene simultaneously with at least two capture devices to obtain at least two images, where the calibration parameters of the at least two capture devices are identical and the images captured by different devices differ in brightness; determining the overlapping region of the at least two images and dividing it into multiple sub-regions of a preset size; for each of the at least two images, determining the image block corresponding to each sub-region; for each of the multiple sub-regions, selecting from the image blocks corresponding to that sub-region the block with the largest weight value, where the weight value is determined from the noise of the block and the color values of its color channels; and generating an image from the selected blocks. This embodiment can obtain high-dynamic-range images of moving objects, or images shot while in motion, without smearing.
Description
Technical field
Embodiments of the present application relate to the field of computer technology, in particular to the field of Internet technology, and more particularly to a method and apparatus for generating an image.
Background technology
Daily life frequently requires shooting scenes in which dark and bright areas coexist. An ordinary camera has relatively low latitude (the camera's ability to record both the brightest and the darkest details and the levels between them), and the dynamic range of the photos it takes (the range of a photo from brightest to darkest) is small (for example, around 70 dB). Ordinary photos may therefore suffer from overexposed bright areas or underexposed dark areas, making it difficult to show the details of the dark areas and the bright areas clearly at the same time.
Under these circumstances, high-dynamic-range images with a wider dynamic range have emerged. Compared with ordinary images, high-dynamic-range images can provide more image detail and can more faithfully reproduce the visual effect of a real scene.
Summary of the invention
Embodiments of the present application propose a method and apparatus for generating an image.
In a first aspect, an embodiment of the present application provides a method for generating an image, the method including: exposing a target scene simultaneously with at least two capture devices to obtain at least two images, where the calibration parameters of the at least two capture devices are identical and the images captured by different devices differ in brightness; determining the overlapping region of the at least two images and dividing it into multiple sub-regions of a preset size; for each of the at least two images, determining the image block corresponding to each sub-region; for each of the multiple sub-regions, selecting from the image blocks corresponding to that sub-region the block with the largest weight value, where the weight value is determined from the noise of the block and the color values of its color channels; and generating an image from the selected blocks.
In some embodiments, selecting the image block with the largest weight value from the image blocks corresponding to a sub-region includes: for each image block corresponding to the sub-region, converting the block to a grayscale image and applying noise filtering to the converted grayscale image to obtain a first factor; obtaining the color values of the block's red, green and blue channels and taking the standard deviation of the three channels' color values as the block's second factor; taking the product of the first factor and the second factor as the block's weight value; and selecting, among the image blocks corresponding to the sub-region, the block with the largest weight value.
In some embodiments, the at least two capture devices are arranged in a line, and the brightness of the at least two images is set according to the ambient brightness of the target scene.
In some embodiments, determining the overlapping region of the at least two images includes: extracting feature points of the at least two images and matching them; and determining the displacement between the at least two images from the matched feature points to obtain the overlapping region of the at least two images.
In some embodiments, the method further includes performing face recognition on the generated image and generating recognition information.
In a second aspect, an embodiment of the present application provides an apparatus for generating an image, the apparatus including: an acquisition unit configured to obtain at least two images by exposing a target scene simultaneously with at least two capture devices, where the calibration parameters of the at least two capture devices are identical and the images captured by different devices differ in brightness; a division unit configured to determine the overlapping region of the at least two images and divide it into multiple sub-regions of a preset size; a determination unit configured to determine, for each of the at least two images, the image block corresponding to each sub-region; a selection unit configured to select, for each of the multiple sub-regions, the image block with the largest weight value from the blocks corresponding to that sub-region, the weight value being determined from the noise of the block and the color values of its color channels; and a generation unit configured to generate an image from the selected blocks.
In some embodiments, the selection unit includes: a weight determination module configured to, for each image block corresponding to a sub-region, convert the block to a grayscale image and apply noise filtering to the converted grayscale image to obtain a first factor, obtain the color values of the block's red, green and blue channels and take the standard deviation of the three channels' color values as the block's second factor, and take the product of the first factor and the second factor as the block's weight value; and an image selection module configured to select the block with the largest weight value among the blocks corresponding to the sub-region.
In some embodiments, the at least two capture devices are arranged in a line, and the brightness of the at least two images is set according to the ambient brightness of the target scene.
In some embodiments, the division unit includes: a matching module configured to extract feature points of the at least two images and match them; a displacement module configured to determine the displacement between the at least two images from the matched feature points and obtain the overlapping region of the at least two images; and a division module configured to divide the overlapping region into multiple sub-regions of a preset size.
In some embodiments, the apparatus further includes a recognition unit configured to perform face recognition on the generated image and generate recognition information.
The method and apparatus for generating an image provided by the embodiments of the present application expose a target scene simultaneously with at least two capture devices to obtain at least two images, determine the overlapping region between the at least two images, divide each image's overlapping region into multiple sub-regions of a preset size, select the block with the highest weight value for each sub-region, and finally generate an image from the selected blocks, thereby obtaining high-dynamic-range images of moving objects, or images shot while in motion, without smearing.
Description of the drawings
Other features, objects and advantages of the present application will become more apparent from the detailed description of non-limiting embodiments made with reference to the following drawings:
Fig. 1 is a diagram of an exemplary system architecture to which the present application can be applied;
Fig. 2 is a flowchart of one embodiment of the method for generating an image according to the present application;
Fig. 3 is a schematic diagram of one application scenario of the method for generating an image according to the present application;
Fig. 4 is a diagram illustrating the effect of an image generated with the method for generating an image of an embodiment of the present application;
Fig. 5 is a structural diagram of one embodiment of the apparatus for generating an image according to the present application;
Fig. 6 is a structural diagram of a computer system suitable for implementing the server of the embodiments of the present application.
Detailed description
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the related invention and do not limit it. It should also be noted that, for ease of description, the drawings show only the parts relevant to the invention.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in them may be combined with each other. The application is described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which the method for generating an image or the apparatus for generating an image of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include capture devices 101, 102 and 103, a network 104 and a server 105. The network 104 provides the medium of communication links between the capture devices 101, 102, 103 and the server 105, and may include various connection types, such as wired or wireless communication links, or fiber-optic cables.
A user may use the capture devices 101, 102 and 103 to take photos and send them to the server 105 through the network 104. The capture devices 101, 102 and 103 may be any image capture devices with lenses, including but not limited to video cameras, still cameras, webcams, smartphones and tablet computers.
The server 105 may be a background server that supports image generation; it can process the received images and generate an image.
It should be noted that the method for generating an image provided by the embodiments of the present application is generally executed by the server 105, and correspondingly the apparatus for generating an image is generally arranged in the server 105.
It should be understood that the numbers of capture devices, networks and servers in Fig. 1 are merely illustrative; there may be any number of capture devices, networks and servers, according to implementation needs.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for generating an image according to the present application is shown. The method for generating an image includes the following steps:
Step 201: expose a target scene simultaneously with at least two capture devices to obtain at least two images, where the calibration parameters of the at least two capture devices are identical and the images captured by different devices differ in brightness.
In this embodiment, the electronic device on which the method for generating an image runs (such as the server shown in Fig. 1) may, through a wired or wireless connection, obtain the at least two images produced by the at least two capture devices exposing the target scene simultaneously; for example, three images obtained by exposing the target scene simultaneously with three cameras. The calibration parameters of the at least two capture devices are identical, including identical sensor parameters and identical lens parameters, so that the at least two capture devices have the same focal length, the same aperture, the same resolution, the same orientation and so on. The same pixel in the at least two images then differs only by a displacement (also called parallax), which improves the efficiency of image processing and the quality of the generated image.
In addition, the at least two images differ in brightness from one another. Since the exposures start at the same instant, the brightness of the at least two images can be controlled by controlling when each exposure ends. In some optional implementations of this embodiment, the brightness of the at least two images is set according to the ambient brightness of the target scene. For example, the intrinsic brightness of the target scene may serve as a reference brightness from which the brightness of the at least two images is set. As an example, when three capture devices are used to obtain three images, the brightnesses of the three images may be half, one times and twice the reference brightness, respectively.
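The relation between exposure end times and the target brightness ratios can be sketched as follows. This is an illustrative helper under stated assumptions: the patent gives no concrete procedure, and the function name and microsecond unit are hypothetical.

```python
def exposure_times_us(reference_us, ratios=(0.5, 1.0, 2.0)):
    """All cameras begin exposing at the same instant; image brightness is
    varied only by ending each exposure at a different time, here at half,
    one times and twice the reference exposure."""
    return [reference_us * r for r in ratios]
```

For a 10 ms reference exposure this yields exposures of 5 ms, 10 ms and 20 ms, matching the half / one times / twice example above.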
In this embodiment, the target scene may be a static object or environment, or a moving object; likewise, the shot may be taken while stationary or while in motion. When shooting a moving object, or shooting while in motion, simultaneous exposure ensures that there is no delay between the at least two images, so the generated image does not smear.
To achieve simultaneous exposure, the at least two capture devices may be controlled by timers or by interrupts; other software or hardware control methods may also be used to achieve simultaneous exposure, and the present application places no limit on this.
In practice, the at least two capture devices are usually spaced apart and may be arranged in various ways. In some optional implementations of this embodiment, the at least two capture devices are arranged in a line, for example horizontally or vertically. When the at least two capture devices are arranged in a line, the displacement (parallax) between the same pixel in the at least two images is easier to determine.
It should be pointed out that the wireless connection may include, but is not limited to, 3G/4G, WiFi, Bluetooth, WiMAX, Zigbee and UWB (ultra-wideband) connections, as well as other wireless connection methods now known or developed in the future.
Step 202: determine the overlapping region of the at least two images and divide it into multiple sub-regions of a preset size.
In this embodiment, the electronic device on which the method for generating an image runs (such as the server shown in Fig. 1) may analyze the at least two images obtained in step 201, determine the overlapping region between the at least two images, and then, according to the size (or resolution) of the determined overlapping region, divide it into multiple sub-regions of a preset size (or preset resolution). For example, if the resolution of the determined overlapping region is 800 × 480 and the preset resolution is 16 × 16, the overlapping region can be divided into 1500 (that is, 50 × 30) sub-regions.
In this embodiment, the overlapping region between the images may be determined by a machine learning model trained in advance.
The machine learning model may be an artificial neural network, which abstracts the human brain's neuronal network from an information-processing perspective, builds a simple model, and forms different networks through different connection patterns. Such a network usually consists of a large number of interconnected nodes (or neurons), each representing a specific output function called an activation function. Each connection between two nodes represents a weight value for the signal passing through that connection (also called a parameter), and the output of the network varies with the network's connection pattern, weight values and activation functions. A machine learning model generally includes multiple layers, each containing multiple nodes; in general, the nodes of the same layer may share the same weights while the nodes of different layers may have different weights, so the parameters of the model's layers may also differ.
Here, the electronic device may feed the at least two images into the input side of the machine learning model; after passing in turn through the processing of each layer's parameters (such as products and convolutions), information is emitted at the output side of the model, and this output is the overlapping region between the at least two images.
The machine learning model characterizes the correspondence between at least two images and their overlapping region, and the electronic device may train the model in various ways. As an example, the electronic device may first obtain a training sample set in which each sample contains at least two images and an overlapping region determined for them in advance, and then train the model by using the at least two images in each sample as input and the predetermined overlapping region as the expected output, obtaining the machine learning model.
In some optional implementations of this embodiment, the overlapping region between the images may be determined by feature-point matching.
Specifically, determining the overlapping region between the at least two images may include:
First, extract the feature points of the at least two images and match them. Feature points, also known as interest points or key points, are prominent points in an image that carry representative meaning; through them an image can be identified, registered with another image, used for 3D reconstruction, and so on. Feature points can be extracted and matched in many ways; for example, the SURF (Speeded-Up Robust Features) operator, a robust local feature detection and description algorithm, may be used to extract and match the feature points of the at least two images.
Then, determine the displacement (parallax) between the at least two images from the matched feature points and obtain the overlapping region of the at least two images. Since the same pixel in the at least two images differs only by a displacement (parallax), averaging over the matched feature points yields a concrete value for the displacement, from which the overlapping region between the at least two images is determined.
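The averaging step can be sketched as follows, assuming feature points have already been extracted and matched (for example with SURF) and that, as the text states, the images differ only by a translation; the helper names are hypothetical.

```python
import numpy as np

def estimate_displacement(points_a, points_b):
    """points_a[i] and points_b[i] are the (x, y) coordinates of matched
    feature points in the two images; because calibrated, co-oriented
    cameras leave only a parallax shift, the mean coordinate difference
    estimates that shift."""
    return np.mean(np.asarray(points_b, float) - np.asarray(points_a, float),
                   axis=0)

def overlap_region(width, height, shift):
    """Rectangle (x0, y0, x1, y1), in image A's coordinates, that is also
    visible in image B when B's content equals A's shifted by (dx, dy)."""
    dx, dy = int(round(shift[0])), int(round(shift[1]))
    x0, y0 = max(0, dx), max(0, dy)
    x1, y1 = min(width, width + dx), min(height, height + dy)
    return x0, y0, x1, y1
```

A positive horizontal shift simply crops the left edge of one image and the right edge of the other; the remaining rectangle is the overlapping region to be divided into sub-regions.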
It should be noted that when the overlapping region is divided into sub-regions of a preset size, its dimensions may not divide evenly. For example, with a preset size of 16 × 16 and an overlapping region of 600 × 480, 600 is not divisible by 16; in that case the overlapping region can be padded with 0 (or 255) to make the division possible, for example padding with 0 to enlarge the overlapping region to 608 × 480 and removing the padded region after the image is composed.
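The padding-and-division step can be sketched as follows; a minimal illustration with numpy, where the function name is hypothetical.

```python
import numpy as np

def divide_into_blocks(region, block=16, pad_value=0):
    """Pad the overlapping region on the right/bottom so both sides become
    multiples of `block`, then split it into block x block sub-regions in
    row-major order.  The padded strip is removed again after the final
    image has been composed."""
    h, w = region.shape[0], region.shape[1]
    pad_h, pad_w = (-h) % block, (-w) % block      # rows/cols of padding
    pad = ((0, pad_h), (0, pad_w)) + ((0, 0),) * (region.ndim - 2)
    padded = np.pad(region, pad, constant_values=pad_value)
    rows, cols = padded.shape[0] // block, padded.shape[1] // block
    blocks = [padded[r * block:(r + 1) * block, c * block:(c + 1) * block]
              for r in range(rows) for c in range(cols)]
    return blocks, (rows, cols)
```

For the 600 × 480 example above this pads the width to 608 and produces a 30 × 38 grid of 16 × 16 blocks.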
The above example describes square sub-regions, but this is merely illustrative. It should be appreciated that sub-regions may have any suitable shape, for example rectangles, triangles or other suitable shapes, which those skilled in the art may configure according to the needs of the actual application scenario.
Step 203: for each of the at least two images, determine the image block corresponding to each sub-region.
In this embodiment, the electronic device on which the method for generating an image runs (such as the server shown in Fig. 1) may, according to the overlapping region determined in step 202 and the sub-regions into which it was divided, determine in each of the at least two images the image block corresponding to each sub-region (that is, each sub-region corresponds to at least two image blocks; for example, when the at least two images are three images, each sub-region corresponds to three image blocks) for subsequent processing.
Step 204: for each of the multiple sub-regions, select from the image blocks corresponding to that sub-region the block with the largest weight value, the weight value being determined from the noise of the block and the color values of its color channels.
In this embodiment, for each of the multiple sub-regions, the electronic device on which the method for generating an image runs (such as the server shown in Fig. 1) may determine the weight values of the at least two image blocks corresponding to the sub-region from the noise of each block and the color values of its color channels, and then select the block with the largest weight value among the at least two image blocks corresponding to the sub-region.
In some optional implementations of this embodiment, selecting the image block with the largest weight value from the blocks corresponding to a sub-region includes the following steps:
In the first step, for each image block corresponding to the sub-region (for example, three image blocks): convert the block to a grayscale image and apply noise filtering to the converted grayscale image to obtain the first factor; obtain the color values of the block's red, green and blue channels and take the standard deviation of the three channels' color values as the block's second factor; and take the product of the first factor and the second factor as the block's weight value.
As an example, the first factor C can be determined by formula (1), in which G is the grayscale matrix of the image block (its size is set by the preset size; for example, a preset size of 16 × 16 gives a 16 × 16 grayscale matrix), L is a Laplacian filter matrix, a Gaussian low-pass filter matrix is used for the noise filtering, and D is the high-frequency detail information contained in the block (obtained by applying the Laplacian filter L to the grayscale matrix G).
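Formula (1) itself is not reproduced here. Writing F for the Gaussian low-pass filter matrix, one plausible reconstruction consistent with the symbol definitions above (an assumption, not the patent's verified expression) is:

\[
D = L * (F * G), \qquad C = \sum_{i,j} \lvert D_{ij} \rvert
\]

where \(*\) denotes two-dimensional convolution: the grayscale block G is first smoothed by F to suppress noise, the Laplacian filter L then extracts the remaining high-frequency detail D, and the first factor C aggregates its magnitude.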
It should be noted that although the above example determines the first factor with a Laplacian filter and a Gaussian low-pass filter, the application is not limited to this; other methods (for example, wavelet-based Bayesian threshold denoising) or other filters (for example, smoothing with a Butterworth low-pass filter) may also be used to determine the first factor.
As an example, the second factor S can be determined by formula (2), in which I_R is the color value of the red channel, I_G the color value of the green channel, I_B the color value of the blue channel, and I_av the average of the three channels' color values.
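Formula (2) is likewise not reproduced here; the standard deviation these definitions describe can be written as:

\[
I_{av} = \frac{I_R + I_G + I_B}{3}, \qquad
S = \sqrt{\frac{(I_R - I_{av})^2 + (I_G - I_{av})^2 + (I_B - I_{av})^2}{3}}
\]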
With the above example, the weight value of the image block can be determined as C × S.
In the second step, select the block with the largest weight value among the image blocks corresponding to the sub-region.
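The two steps can be sketched as follows. This is an illustrative implementation under stated assumptions: the text does not fix the exact noise filter or how C aggregates the filtered detail, so a Gaussian-smoothed Laplacian response summed over the block stands in for the first factor, and the per-channel mean values stand in for the channels' color values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def block_weight(block_rgb):
    """Weight of one image block: first factor C (noise-filtered
    high-frequency detail of the grayscale version of the block) times
    second factor S (standard deviation of the three channel color values)."""
    gray = block_rgb.astype(float).mean(axis=2)          # grayscale conversion
    detail = laplace(gaussian_filter(gray, sigma=1.0))   # denoise, then detail
    c = np.abs(detail).sum()                             # first factor
    channel_values = block_rgb.reshape(-1, 3).mean(axis=0)
    s = channel_values.std()                             # second factor
    return c * s

def select_best(blocks):
    """Index of the block with the largest weight among the blocks
    corresponding to one sub-region."""
    return int(np.argmax([block_weight(b) for b in blocks]))
```

A flat, colorless block scores zero on both factors, while a detailed block with unequal channel means scores high, which matches the intent of favoring well-exposed, detail-rich blocks.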
Step 205: generate an image from the selected image blocks.
In this embodiment, the electronic device on which the method for generating an image runs (such as the server shown in Fig. 1) may stitch and fuse the image blocks selected in step 204 according to the arrangement of the sub-regions to generate an image.
In this embodiment, various blending algorithms may be used to fuse the stitched image, eliminating seams and improving the quality of the generated image. As an example, a multi-band blending algorithm may be used to fuse the stitched image. Multi-band blending is a blending algorithm built on pyramid decomposition (Gaussian and Laplacian pyramids): the stitched image is decomposed into sub-images at multiple different spatial resolutions and scales to form a pyramid, each pyramid level is fused separately, and the fused levels are finally recombined into the output image.
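The pyramid procedure can be sketched as follows: a minimal multi-band blend of two aligned images under a mask, using scipy. The level count, filter sigma and helper names are illustrative assumptions, not the patent's exact procedure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def pyramid_blend(img_a, img_b, mask, levels=4):
    """Multi-band (Laplacian-pyramid) blending: decompose both images into
    band-pass levels, blend each level with a progressively smoother mask,
    then recombine the blended levels into one seam-free image."""
    def gauss_pyr(x, n):
        pyr = [x]
        for _ in range(n - 1):  # smooth, then halve the resolution
            pyr.append(zoom(gaussian_filter(pyr[-1], 1.0), 0.5, order=1))
        return pyr

    def lap_pyr(x, n):
        g = gauss_pyr(x, n)     # band-pass = level minus upsampled next level
        return [g[i] - zoom(g[i + 1], 2.0, order=1)[:g[i].shape[0], :g[i].shape[1]]
                for i in range(n - 1)] + [g[-1]]

    la, lb, gm = lap_pyr(img_a, levels), lap_pyr(img_b, levels), gauss_pyr(mask, levels)
    blended = [m * a + (1 - m) * b for a, b, m in zip(la, lb, gm)]
    out = blended[-1]
    for level in reversed(blended[:-1]):  # collapse the pyramid
        out = zoom(out, 2.0, order=1)[:level.shape[0], :level.shape[1]] + level
    return out
```

Because the mask is blended at every scale, low frequencies transition gradually across the seam while high-frequency detail stays sharp, which is what suppresses visible stitching seams.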
In some optional implementations of this embodiment, the method for generating an image further includes performing face recognition on the generated image and generating recognition information. The high-dynamic-range image of the target scene generated by the method of this embodiment can clearly show the details of both bright and dark areas; therefore, when the target scene includes a face, a clear face image can be obtained, improving the accuracy of face recognition and making the method applicable to fields such as security and surveillance.
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of one application scenario of the method for generating an image according to this embodiment. In the application scenario of Fig. 3, capture devices 301, 302 and 303, whose calibration parameters are identical, first expose the target scene 304 simultaneously, obtaining images 1, 2 and 3 of different brightness (for example, half, one times and twice the reference brightness), which are sent to the server 305. The server 305 then analyzes the received images 1, 2 and 3, determines the overlapping region between images 1, 2 and 3 (the solid rectangle in Fig. 3), and divides the overlapping region into multiple sub-regions (the dashed rectangles in Fig. 3) according to a preset size (for example, 16 × 16). Next, the server 305 determines in turn the image blocks of images 1, 2 and 3 corresponding to each sub-region: for example, the sub-region in the first row and first column corresponds to block A11 of image 1, block B11 of image 2 and block C11 of image 3; the sub-region in the first row and second column corresponds to block A12 of image 1, block B12 of image 2 and block C12 of image 3; and so on. The server 305 then determines, for each sub-region, the weight values of the corresponding image blocks from the noise of each block and the color values of its three color channels, and selects the block with the highest weight value: for example, for the sub-region in the first row and first column it computes the weight values of blocks A11, B11 and C11 and selects the block A11 with the highest weight value, and similarly selects the block B12 with the highest weight value for the sub-region in the first row and second column, and so on. Finally, the server 305 generates an image from the blocks selected for all sub-regions.
With continued reference to Fig. 4, it illustrates the effect of an image generated with the method for generating an image of an embodiment of the present application. Images 401, 402 and 403 are images of different brightness obtained by three capture devices exposing a target scene simultaneously while in motion, and image 404 is the image synthesized from images 401, 402 and 403 by the electronic device described above (for example, the server 305 shown in Fig. 3). As the figure shows, image 401, which is darker than the reference brightness (for example, half of it), cannot clearly show the details of the dark areas but can clearly show the details of the bright areas; image 402, which is comparable to the reference brightness, cannot clearly show the details of either the dark or the bright areas, but can clearly show the details of the transition region (the region between bright and dark areas); and image 403, which is brighter than the reference brightness (for example, twice it), cannot clearly show the details of the bright areas but shows the details of the dark areas distinctly. Image 404, generated with the method for generating an image of an embodiment of the present application, can provide more detail: it clearly shows the details of the bright, dark and transition regions and reproduces the visual effect of the real scene as seen by the naked eye. Moreover, since images 401, 402 and 403 were exposed simultaneously, image 404 does not smear and is of high quality.
The method for generating an image provided by the above embodiments of the present application exposes a target scene simultaneously with at least two capture devices whose calibration parameters are identical, obtaining at least two images of different brightness, and thereby obtains high-dynamic-range images of moving objects, or images shot while in motion, without smearing.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for generating an image. The apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may specifically be applied to a server.
As shown in Fig. 5, the apparatus 500 for generating an image of the present embodiment includes: an acquiring unit 501, a division unit 502, a determination unit 503, a selecting unit 504 and a generation unit 505. The acquiring unit 501 is configured to obtain at least two images by simultaneously exposing a target scene with at least two capture devices, where the calibration parameters of the at least two capture devices are identical and the images shot by different capture devices differ in brightness. The division unit 502 is configured to determine the overlapping region of the at least two images and divide the overlapping region into multiple subregions according to a preset size. The determination unit 503 is configured to determine, for each of the at least two images, the image block corresponding to each subregion. The selecting unit 504 is configured to select, for each of the multiple subregions, the image block with the largest weight from the image blocks corresponding to that subregion, where the weight is determined based on the noise of the image block and the color values of its color channels. The generation unit 505 is configured to generate an image based on the selected image blocks.
In the present embodiment, the acquiring unit 501 of the apparatus 500 for generating an image may obtain, through a wired or wireless connection, the at least two images obtained by simultaneously exposing the target scene with at least two capture devices, for example, three images obtained by simultaneously exposing the target scene with three cameras. The calibration parameters of the at least two capture devices are identical: their sensor parameters are identical and their lens parameters are identical, so that the at least two capture devices have the same focal length, the same f-number, the same resolution, the same orientation, and so on. In this way, the same pixel in the at least two images differs only in displacement (parallax), which improves the efficiency of image processing and the quality of the generated image.
In some optional implementations of the present embodiment, the brightness of the at least two images is set according to the ambient brightness of the target scene.
In some optional implementations of the present embodiment, the at least two capture devices are arranged linearly, for example, in a horizontal or vertical arrangement.
In the present embodiment, the division unit 502 may detect the at least two images obtained by the acquiring unit 501 to determine the overlapping region between them, and then divide the overlapping region into multiple subregions according to a preset size (or preset resolution) based on the size (or resolution) of the determined overlapping region. For example, if the resolution of the determined overlapping region is 800 × 480 and the preset resolution is 16 × 16, the overlapping region may be divided into 1500 (that is, 50 × 30) subregions.
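The division arithmetic above can be sketched as a simple grid partition; the following is a minimal illustration assuming the overlapping region is an axis-aligned rectangle whose resolution is an exact multiple of the preset block size (the function name is illustrative, not from the patent):

```python
def divide_into_subregions(width, height, block_w, block_h):
    """Divide a width x height overlapping region into a grid of
    block_w x block_h subregions, returned as (x, y, w, h) tuples."""
    if width % block_w or height % block_h:
        raise ValueError("region size must be a multiple of the block size")
    return [(x, y, block_w, block_h)
            for y in range(0, height, block_h)
            for x in range(0, width, block_w)]

# The example from the text: an 800 x 480 region with 16 x 16 blocks
subregions = divide_into_subregions(800, 480, 16, 16)
print(len(subregions))  # 1500, i.e. 50 x 30
```

A real implementation would also have to decide how to handle a region whose size is not an exact multiple of the block size (for example, by padding or by allowing smaller edge blocks), which the patent does not specify.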
In some optional implementations of the present embodiment, the division unit 502 includes a matching module, a displacement module and a division module. The matching module is configured to extract feature points of the at least two images and match the feature points of the at least two images; the displacement module is configured to determine the displacement between the at least two images based on the matched feature points, thereby obtaining the overlapping region of the at least two images; and the division module is configured to divide the overlapping region into multiple subregions according to the preset size.
In the present embodiment, the determination unit 503 may determine, from each of the at least two images, the image block corresponding to each subregion according to the overlapping region and the subregions determined by the division unit 502 (that is, each subregion corresponds to at least two image blocks; for example, when the at least two images are three images, each subregion corresponds to three image blocks) for subsequent processing.
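A minimal sketch of how the determination unit might cut out, from each image's overlapping region, the image block for one subregion. This works on plain nested pixel lists and uses illustrative names not taken from the patent:

```python
def image_block(image, subregion):
    """image: 2-D list of pixels (rows of the overlapping region);
    subregion: (x, y, w, h) rectangle in overlap coordinates.
    Returns the w x h block of pixels for this subregion."""
    x, y, w, h = subregion
    return [row[x:x + w] for row in image[y:y + h]]

def blocks_for_subregion(images, subregion):
    """One block per source image, e.g. three blocks for three images."""
    return [image_block(img, subregion) for img in images]
```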
In the present embodiment, for each of the multiple subregions, the selecting unit 504 may determine the weights of the at least two image blocks corresponding to that subregion based on the noise of each image block and the color values of its color channels, and then select the image block with the largest weight from the at least two image blocks corresponding to that subregion.
In some optional implementations of the present embodiment, the selecting unit 504 includes a weight determination module and an image selection module. The weight determination module is configured to, for each image block corresponding to the subregion, convert the image block into a grayscale map and perform noise filtering on the converted grayscale map to obtain a first factor; obtain the color values of the red, green and blue channels of the image block and determine the standard deviation of the color values of the three color channels as a second factor of the image block; and determine the product of the first factor and the second factor as the weight of the image block. The image selection module is configured to select the image block with the largest weight among the image blocks corresponding to the subregion.
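The weighting scheme above can be sketched in pure Python. The patent names the ingredients (a noise-derived first factor, a channel-standard-deviation second factor, and their product) but not the exact noise filter, so the choices below are assumptions: BT.601 luma weights for the grayscale conversion, and a 4-neighbour Laplacian residual inverted so that less noisy blocks score higher.

```python
def block_weight(block):
    """block: 2-D list of (r, g, b) pixels. Returns the weight as
    first_factor * second_factor, per the scheme in the text."""
    h, w = len(block), len(block[0])
    # Grayscale conversion (ITU-R BT.601 luma weights, an assumed choice).
    gray = [[0.299 * r + 0.587 * g + 0.114 * b for r, g, b in row]
            for row in block]
    # First factor: noise score from filtering the grayscale map.
    # Here: mean absolute 4-neighbour Laplacian residual, inverted so a
    # cleaner block scores higher (an assumption, not the patent's formula).
    resid, count = 0.0, 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y - 1][x] + gray[y + 1][x] +
                   gray[y][x - 1] + gray[y][x + 1] - 4 * gray[y][x])
            resid += abs(lap)
            count += 1
    first = 1.0 / (1.0 + resid / count) if count else 1.0
    # Second factor: standard deviation of the three channel means.
    n = h * w
    means = [sum(p[c] for row in block for p in row) / n for c in range(3)]
    mu = sum(means) / 3
    second = (sum((m - mu) ** 2 for m in means) / 3) ** 0.5
    return first * second

def select_block(blocks):
    """Pick the block with the largest weight for one subregion."""
    return max(blocks, key=block_weight)
```

Note that a perfectly neutral gray block gets a second factor of zero under this reading, which matches the intuition that a block whose channels all agree carries little recoverable color detail.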
In some optional implementations of the present embodiment, the apparatus 500 for generating an image further includes a recognition unit. The recognition unit is configured to perform face recognition on the generated image and generate recognition information. Since the high-dynamic-range image of the target scene generated by the apparatus for generating an image of the present embodiment can clearly show the details of the bright and dark regions, a clear face image can be obtained when the target scene contains a face, which improves the accuracy of face recognition; the apparatus can therefore be applied to fields such as security and surveillance.
The apparatus for generating an image provided by the above embodiment of the present application simultaneously exposes a target scene with at least two capture devices having identical calibration parameters to obtain at least two images of different brightness, so that a high-dynamic-range image of a moving object, or one shot while the devices are in motion, can be obtained without smearing.
Referring now to Fig. 6, a schematic structural diagram is shown of a computer system 600 suitable for implementing the server of the embodiments of the present application. The server shown in Fig. 6 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in Fig. 6, the computer system 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded into a random access memory (RAM) 603 from a storage section 608. The RAM 603 also stores various programs and data required for the operation of the system 600. The CPU 601, the ROM 602 and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, etc.; an output section 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), etc., and a speaker, etc.; a storage section 608 including a hard disk, etc.; and a communication section 609 including a network interface card such as a LAN card, a modem, etc. The communication section 609 performs communication processing via a network such as the Internet. A driver 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disc, a magneto-optical disk, a semiconductor memory, etc., is mounted on the driver 610 as needed, so that a computer program read from it can be installed into the storage section 608 as needed.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609 and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above functions defined in the method of the present application are performed.
It should be noted that the computer-readable medium described herein may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus or device. In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and such a medium can send, propagate or transmit a program for use by or in combination with an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to: wireless, wire, optical cable, RF, etc., or any suitable combination of the above.
Computer program code for performing the operations of the present application may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as a standalone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate possible architectures, functions and operations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or a part of code, and the module, program segment or part of code contains one or more executable instructions for implementing the specified logical function. It should also be noted that in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings. For example, two boxes shown in succession may actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should further be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by software or by hardware. The described units may also be provided in a processor; for example, a processor may be described as including an acquiring unit, a division unit, a determination unit, a selecting unit and a generation unit. The names of these units do not in certain cases constitute a limitation on the units themselves; for example, the acquiring unit may also be described as "a unit that obtains at least two images by simultaneously exposing a target scene with at least two capture devices".
As another aspect, the present application also provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist alone without being assembled into the apparatus. The computer-readable medium carries one or more programs, and when the one or more programs are executed by the apparatus, the apparatus: obtains at least two images by simultaneously exposing a target scene with at least two capture devices, where the calibration parameters of the at least two capture devices are identical and the images shot by different capture devices differ in brightness; determines the overlapping region of the at least two images and divides the overlapping region into multiple subregions according to a preset size; determines, for each of the at least two images, the image block corresponding to each subregion; selects, for each of the multiple subregions, the image block with the largest weight from the image blocks corresponding to that subregion, where the weight is determined based on the noise of the image block and the color values of its color channels; and generates an image based on the selected image blocks.
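The final generation step, pasting each subregion's winning block back into a single output image, might look like the following minimal sketch over nested pixel lists. The function name is not from the patent, and a production implementation would likely also blend at block seams, which this illustration omits:

```python
def assemble(selected, grid_w, grid_h, block_w, block_h):
    """selected: list of blocks in row-major subregion order; paste them
    back into one (grid_h*block_h) x (grid_w*block_w) pixel grid."""
    out = [[None] * (grid_w * block_w) for _ in range(grid_h * block_h)]
    for i, block in enumerate(selected):
        bx, by = (i % grid_w) * block_w, (i // grid_w) * block_h
        for dy, row in enumerate(block):
            out[by + dy][bx:bx + block_w] = row
    return out
```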
The above description is only a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to the technical solutions formed by the specific combinations of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present application.
Claims (10)
1. A method for generating an image, comprising:
obtaining at least two images by simultaneously exposing a target scene with at least two capture devices, wherein the calibration parameters of the at least two capture devices are identical and the images shot by different capture devices differ in brightness;
determining the overlapping region of the at least two images and dividing the overlapping region into multiple subregions according to a preset size;
for each of the at least two images, determining an image block corresponding to each subregion;
for each of the multiple subregions, selecting the image block with the largest weight from the image blocks corresponding to that subregion, wherein the weight is determined based on the noise of the image block and the color values of its color channels; and
generating an image based on the selected image blocks.
2. The method according to claim 1, wherein the selecting the image block with the largest weight from the image blocks corresponding to the subregion comprises:
for each image block corresponding to the subregion, converting the image block into a grayscale map and performing noise filtering on the converted grayscale map to obtain a first factor; obtaining the color values of the red, green and blue channels of the image block and determining the standard deviation of the color values of the three color channels as a second factor of the image block; and determining the product of the first factor and the second factor as the weight of the image block; and
selecting the image block with the largest weight among the image blocks corresponding to the subregion.
3. The method according to claim 2, wherein the at least two capture devices are arranged linearly, and the brightness of the at least two images is set according to the ambient brightness of the target scene.
4. The method according to claim 2, wherein the determining the overlapping region of the at least two images comprises:
extracting feature points of the at least two images and matching the feature points of the at least two images; and
determining the displacement between the at least two images based on the matched feature points to obtain the overlapping region of the at least two images.
5. The method according to claim 1, wherein the method further comprises:
performing face recognition on the generated image to generate recognition information.
6. An apparatus for generating an image, comprising:
an acquiring unit, configured to obtain at least two images by simultaneously exposing a target scene with at least two capture devices, wherein the calibration parameters of the at least two capture devices are identical and the images shot by different capture devices differ in brightness;
a division unit, configured to determine the overlapping region of the at least two images and divide the overlapping region into multiple subregions according to a preset size;
a determination unit, configured to determine, for each of the at least two images, an image block corresponding to each subregion;
a selecting unit, configured to select, for each of the multiple subregions, the image block with the largest weight from the image blocks corresponding to that subregion, wherein the weight is determined based on the noise of the image block and the color values of its color channels; and
a generation unit, configured to generate an image based on the selected image blocks.
7. The apparatus according to claim 6, wherein the selecting unit comprises:
a weight determination module, configured to, for each image block corresponding to the subregion, convert the image block into a grayscale map and perform noise filtering on the converted grayscale map to obtain a first factor; obtain the color values of the red, green and blue channels of the image block and determine the standard deviation of the color values of the three color channels as a second factor of the image block; and determine the product of the first factor and the second factor as the weight of the image block; and
an image selection module, configured to select the image block with the largest weight among the image blocks corresponding to the subregion.
8. The apparatus according to claim 6, wherein the apparatus further comprises:
a recognition unit, configured to perform face recognition on the generated image to generate recognition information.
9. A server, comprising:
one or more processors; and
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-5.
10. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810045180.3A CN108184075B (en) | 2018-01-17 | 2018-01-17 | Method and apparatus for generating image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108184075A true CN108184075A (en) | 2018-06-19 |
CN108184075B CN108184075B (en) | 2019-05-10 |
Family
ID=62550876
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810045180.3A Active CN108184075B (en) | 2018-01-17 | 2018-01-17 | Method and apparatus for generating image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108184075B (en) |
Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0982938A1 (en) * | 1998-08-28 | 2000-03-01 | Olympus Optical Co., Ltd. | Electronic camera |
US20070236595A1 (en) * | 2006-04-10 | 2007-10-11 | Sony Taiwan Limited. | Method for Improving Image Stitching Accuracy with Lens Distortion Correction and Device for Implementing the Same |
US20100053346A1 (en) * | 2008-09-03 | 2010-03-04 | Tomoo Mitsunaga | Image Processing Apparatus, Imaging Apparatus, Solid-State Imaging Device, Image Processing Method and Program |
US20100128108A1 (en) * | 2008-11-27 | 2010-05-27 | Samsung Electronics Co., Ltd. | Apparatus and method for acquiring wide dynamic range image in an image processing apparatus |
CN101888487A (en) * | 2010-06-02 | 2010-11-17 | 中国科学院深圳先进技术研究院 | High dynamic range video imaging system and image generating method |
US20140027613A1 (en) * | 2012-07-27 | 2014-01-30 | Scott T. Smith | Bayer symmetric interleaved high dynamic range image sensor |
CN103986875A (en) * | 2014-05-29 | 2014-08-13 | 宇龙计算机通信科技(深圳)有限公司 | Image acquiring device, method and terminal and video acquiring method |
CN104077759A (en) * | 2014-02-28 | 2014-10-01 | 西安电子科技大学 | Multi-exposure image fusion method based on color perception and local quality factors |
CN104616273A (en) * | 2015-01-26 | 2015-05-13 | 电子科技大学 | Multi-exposure image fusion method based on Laplacian pyramid decomposition |
US20150146029A1 (en) * | 2013-11-26 | 2015-05-28 | Pelican Imaging Corporation | Array Camera Configurations Incorporating Multiple Constituent Array Cameras |
CN104935911A (en) * | 2014-03-18 | 2015-09-23 | 华为技术有限公司 | Method and device for high-dynamic-range image synthesis |
CN105279746A (en) * | 2014-05-30 | 2016-01-27 | 西安电子科技大学 | Multi-exposure image integration method based on bilateral filtering |
US20160050374A1 (en) * | 2013-06-13 | 2016-02-18 | Corephotonics Ltd. | Dual aperture zoom digital camera |
CN105611187A (en) * | 2015-12-22 | 2016-05-25 | 歌尔声学股份有限公司 | Image wide dynamic compensation method and system based on double cameras |
CN106131443A (en) * | 2016-05-30 | 2016-11-16 | 南京大学 | A kind of high dynamic range video synthetic method removing ghost based on Block-matching dynamic estimation |
CN106375675A (en) * | 2016-08-30 | 2017-02-01 | 中国科学院长春光学精密机械与物理研究所 | Aerial camera multi-exposure image fusion method |
CN106530263A (en) * | 2016-10-19 | 2017-03-22 | 天津大学 | Single-exposure high-dynamic range image generation method adapted to medical image |
CN107395998A (en) * | 2017-08-24 | 2017-11-24 | 维沃移动通信有限公司 | A kind of image capturing method and mobile terminal |
Non-Patent Citations (1)
Title |
---|
A. Ardeshir Goshtasby: "Fusion of multi-exposure images", Image and Vision Computing *
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111259683A (en) * | 2018-11-30 | 2020-06-09 | 中光电智能云服股份有限公司 | Skin detection method and image processing apparatus |
CN110035237A (en) * | 2019-04-09 | 2019-07-19 | Oppo广东移动通信有限公司 | Image processing method, device, storage medium and electronic equipment |
CN110035237B (en) * | 2019-04-09 | 2021-08-31 | Oppo广东移动通信有限公司 | Image processing method, image processing device, storage medium and electronic equipment |
CN111988524A (en) * | 2020-08-21 | 2020-11-24 | 广东电网有限责任公司清远供电局 | Unmanned aerial vehicle and camera collaborative obstacle avoidance method, server and storage medium |
CN112102307A (en) * | 2020-09-25 | 2020-12-18 | 杭州海康威视数字技术股份有限公司 | Method and device for determining heat data of global area and storage medium |
CN112102307B (en) * | 2020-09-25 | 2023-10-20 | 杭州海康威视数字技术股份有限公司 | Method and device for determining heat data of global area and storage medium |
CN114827482A (en) * | 2021-01-28 | 2022-07-29 | 北京字节跳动网络技术有限公司 | Image brightness adjusting method and device, electronic equipment and medium |
CN114827482B (en) * | 2021-01-28 | 2023-11-03 | 抖音视界有限公司 | Image brightness adjusting method and device, electronic equipment and medium |
CN117237177A (en) * | 2023-11-15 | 2023-12-15 | 杭州海康威视数字技术股份有限公司 | Watermark processing method and device and electronic equipment |
CN117237177B (en) * | 2023-11-15 | 2024-03-19 | 杭州海康威视数字技术股份有限公司 | Watermark processing method and device and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN108184075B (en) | 2019-05-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108184075B (en) | Method and apparatus for generating image | |
Rana et al. | Deep tone mapping operator for high dynamic range images | |
US10708525B2 (en) | Systems and methods for processing low light images | |
Xu et al. | Arid: A new dataset for recognizing action in the dark | |
CN108197623 | Method and apparatus for detecting a target | |
CN109325933 | Method and device for recognizing recaptured images | |
CN110084775A (en) | Image processing method and device, electronic equipment and storage medium | |
CN107644209 | Face detection method and device | |
CN108229575 | Method and apparatus for detecting a target | |
CN105427263 | Method and terminal for implementing image registration | |
KR20200140713A (en) | Method and apparatus for training neural network model for enhancing image detail | |
CN108509892A (en) | Method and apparatus for generating near-infrared image | |
CN110674759A (en) | Monocular face in-vivo detection method, device and equipment based on depth map | |
CN113592726A (en) | High dynamic range imaging method, device, electronic equipment and storage medium | |
CN110047122A (en) | Render method, apparatus, electronic equipment and the computer readable storage medium of image | |
CN109242794A (en) | Image processing method, device, electronic equipment and computer readable storage medium | |
CN104967786B (en) | Image-selecting method and device | |
CN114092678A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
WO2023217138A1 (en) | Parameter configuration method and apparatus, device, storage medium and product | |
WO2021128593A1 (en) | Facial image processing method, apparatus, and system | |
CN109492601A (en) | Face comparison method and device, computer-readable medium and electronic equipment | |
CN105959593A (en) | Exposure method for camera device and camera device | |
CN108171167 | Method and apparatus for outputting an image | |
CN116506732B (en) | Image snapshot anti-shake method, device and system and computer equipment | |
CN113989387A (en) | Camera shooting parameter adjusting method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |