CN111757082A - Image processing method and system applied to AR intelligent device - Google Patents

Image processing method and system applied to AR intelligent device

Info

Publication number
CN111757082A
Authority
CN
China
Prior art keywords
image
background
brightness
frame
image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010552388.1A
Other languages
Chinese (zh)
Inventor
苏波
王友初
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Augmented Reality Technology Co ltd
Original Assignee
Shenzhen Augmented Reality Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Augmented Reality Technology Co ltd filed Critical Shenzhen Augmented Reality Technology Co ltd
Priority to CN202010552388.1A priority Critical patent/CN111757082A/en
Publication of CN111757082A publication Critical patent/CN111757082A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/122 Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/133 Equalising the characteristics of different image components, e.g. their average brightness or colour balance
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/332 Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074 Stereoscopic image analysis
    • H04N2013/0092 Image segmentation from stereoscopic image signals

Abstract

An image processing method applied to an AR intelligent device comprises the following steps: acquiring captured image data of the current scene; dividing single-frame image data into regions, where each divided frame of image data comprises an image subject area and image background areas; detecting each image background area to obtain a region brightness value, and averaging all the region brightness values to obtain a single-frame brightness value; averaging the single-frame brightness values to obtain a background brightness value of the image data; comparing the background brightness value with a preset brightness reference value and judging the type of environment the current scene is in; and processing the image in the image subject area according to the environment type and then outputting and displaying it. Without adding new hardware components or cost, background brightness detection and environment judgment reduce the influence of external brightness intensity on the display imaging effect, enhance the image display effect of the image subject area, and achieve optimized cost and performance.

Description

Image processing method and system applied to AR intelligent device
Technical Field
The invention belongs to the technical field of image processing, and relates to an image processing method applied to an AR intelligent device.
Background
With the advent of wearable smart devices, demand for AR smart glasses from both consumer and industrial customers has grown explosively. The applications are complex and varied, and the external environment and background in which the device is used differ from case to case, which places higher requirements on the display effect of AR smart glasses; existing AR smart glasses often suffer from a mismatch between the display background and the picture of a given application.
Existing AR smart glasses are generally equipped with an RGB camera and a display screen. The usual implementation is to capture an image of the real world with the RGB camera, pass it to the AR processing system, synthesize virtual information such as text, pictures and voice, superimpose the virtual information directly on the image, and output the result to the display device. Problems arise because the brightness of the external environment varies, and the viewfinder background and the shape, size and color of the image subject area differ: if the camera's framed image is displayed directly, defects such as an unclear picture and an indistinct subject appear. In a bright external environment in particular, the subject image may hardly be visible, which greatly reduces the effect of an otherwise good application or, in severe cases, makes it unusable.
Existing AR smart glasses mainly come in two product forms. One is standard AR smart glasses, whose basic components comprise an RGB camera, an AR processing system and a display screen; the camera data and the virtual text, picture and voice data are simply synthesized and then displayed directly on the screen. The other is found in some high-end AR devices, which add a light-intensity detection sensor to detect the light intensity of the external environment and, based on the detection result, dynamically control the tint depth of the outer protective lens so that the picture can be seen clearly.
Given the advantages and disadvantages of these two product forms: the first type of AR device, lacking a light-intensity sensor, is limited by its image acquisition and display technology when different applications run against different environment backgrounds; its functions are severely restricted and the best application display effect cannot be achieved. The second type adapts to the brightness of the external environment by adjusting the color of the protective lens, but the added light-intensity sensor and the hardware component for adjusting the lens tint greatly increase cost as well as wearing weight, and since only the lens tint is adjusted and the subject image data are not processed, the subject still appears unclear in use.
Disclosure of Invention
To address the defects of prior-art AR smart devices in application and display, the invention uses the existing components of the AR smart glasses: image data are collected with the existing camera set, and an internal processing algorithm integrated in the AR processing system processes the subject image and the background of the AR display separately, so that a clear picture can be seen on the display screen regardless of the light intensity of the external environment or the color contrast of the subject image.
The technical scheme of the invention is an image processing method applied to AR intelligent equipment, which comprises the following steps:
acquiring acquired image data of a current scene;
taking image data of at least one frame, and carrying out region division on single-frame image data, wherein each frame of divided image data comprises an image subject region for display and a plurality of image background regions positioned on the periphery of the image subject region;
detecting each image background area of the single-frame image data to obtain an area brightness value, and carrying out averaging calculation on all the area brightness values to obtain a single-frame brightness value;
carrying out averaging calculation on single-frame brightness values of all the single-frame image data to obtain a background brightness value of the image data;
comparing the background brightness value with a preset brightness reference value, and judging the type of the environment where the current scene is located;
and processing the image in the image subject area according to the environment type and then outputting and displaying the processed image.
With this technical scheme, on existing general-purpose AR smart devices and without adding new hardware components or cost, background brightness detection and environment judgment reduce the influence of external brightness intensity on the display imaging effect, enhance the image display effect of the image subject area, and optimize cost and performance.
In an example of this embodiment, the step of detecting each image background region of the single frame of image data to obtain the region brightness value includes:
and taking M pixel points by N in the region, detecting the brightness values of all the pixel points, removing a certain number of high brightness values and a certain number of low brightness values, and averaging the rest brightness values to be used as the region brightness value of each image background region.
In an example of the technical solution, the single-frame image data is divided into regions according to a nine-square grid (3×3): the central region is the image subject region for output display, and the remaining regions are image background regions.
In an example of the technical solution, the environment types include dark night, moonlit night, cloudy indoor, cloudy outdoor, sunny indoor and sunny outdoor, and the correspondence between each environment type and its preset brightness reference value is as follows: dark night: 0.001-0.02 cd/m²; moonlit night: 0.02-0.3 cd/m²; cloudy indoor: 5-50 cd/m²; cloudy outdoor: 50-500 cd/m²; sunny indoor: 100-1000 cd/m²; sunny outdoor: 1×10⁸-3×10⁹ cd/m².
In an example of this technical solution, the step of "processing the image in the image subject area according to the environment type and then outputting and displaying" includes:
identifying the image of the image subject area, and judging whether an image subject exists or not;
if so, separating the image into an image subject and an image background, otherwise, taking the image as the image background;
and adjusting parameters of the image background according to the environment type, and then overlapping the image background with the image theme for output display, or replacing the image background with an intelligent background corresponding to the environment type according to the environment type, and then overlapping the image background with the image theme for output display.
In one example of the technical solution, one or more of size enlargement, color saturation adjustment, contrast adjustment and brightness adjustment are performed on the separated image subject;
And the processed image subject is superposed with the adjusted image background or intelligent background to generate a composite image for output and display.
In an example of the technical solution, the adjusted image background or the intelligent background is simultaneously overlaid with one or more of picture data, text data and voice data in an application of the AR intelligent device to generate a synthesized image.
In an example of the technical solution, the "adjusting the parameter of the image background according to the environment type" refers to adjusting the brightness and the color of the image background according to the environment type of the current scene.
Another technical solution of the present invention is to provide an image processing system applied to an AR smart device, including:
the image acquisition unit is used for acquiring the acquired image data of the current scene;
the image dividing unit is used for taking image data of at least one frame and carrying out region division on single-frame image data, wherein each frame of divided image data comprises an image subject region for display and a plurality of image background regions positioned on the periphery of the image subject region;
the brightness detection unit is used for detecting each image background area of the single-frame image data to obtain an area brightness value, and carrying out average calculation on all the area brightness values to obtain a single-frame brightness value;
the brightness calculation unit is used for carrying out averaging calculation on single-frame brightness values of all the single-frame image data to obtain a background brightness value of the image data;
the brightness comparison unit is used for comparing the background brightness value with a preset brightness reference value and judging the type of the environment where the current scene is located;
and the image processing unit is used for processing the image in the image subject area according to the environment type and then outputting and displaying the processed image.
In an example of the technical solution, the AR smart device is an AR smart glasses, and the AR smart glasses include an RGB camera and a display module, wherein the RGB camera is used for acquiring image data of a current scene, and the display module is used for displaying an image.
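For illustration only, the six units above can be pictured as a minimal Python class skeleton; the class and method names are assumptions chosen to mirror the unit descriptions, and the bodies are left as stubs rather than an actual implementation.
```python
# Illustrative skeleton only: methods mirror the six units described above.
class ARImageProcessingSystem:
    def acquire(self, camera):
        """Image acquisition unit: return captured frames of the current scene."""
        raise NotImplementedError

    def divide(self, frame):
        """Image dividing unit: split a frame into one subject region and
        several surrounding background regions (e.g. a 3x3 grid)."""
        raise NotImplementedError

    def detect_brightness(self, background_regions):
        """Brightness detection unit: per-region brightness values, averaged
        into a single-frame brightness value."""
        raise NotImplementedError

    def background_brightness(self, single_frame_values):
        """Brightness calculation unit: average over all sampled frames."""
        raise NotImplementedError

    def classify(self, background_value, reference_table):
        """Brightness comparison unit: map the value to an environment type."""
        raise NotImplementedError

    def render(self, subject_region, environment_type):
        """Image processing unit: process the subject image and output it for display."""
        raise NotImplementedError
```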
Drawings
Fig. 1 is a hardware configuration block diagram of the AR smart device in the present embodiment.
Fig. 2 is a flowchart of an image processing method in the present embodiment.
Fig. 3 is a schematic diagram of region division of single-frame image data in the present embodiment.
Detailed Description
The technical solution of the present invention will now be further explained by embodiments with reference to the accompanying drawings. It should be noted that the embodiments described below are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
This embodiment is an image processing method applied to an AR smart device, taking AR smart glasses as the example. As shown in fig. 1, the AR smart device comprises an RGB camera, a processing module and a display module: the RGB camera is used to acquire image data of the current scene, the display module is used to display images, and the processing module, built mainly around a VPU processor, executes the image processing method of this embodiment.
As shown in fig. 2, the method comprises the following steps:
step S10, acquiring the image data of the current scene;
After the AR smart glasses are powered on, the intelligent background mode can be enabled through a mobile phone APP or by voice control. The RGB camera then starts working and captures image data of the current scene, and the processing module obtains the image data, by calling or receiving it, for subsequent processing. The RGB camera can capture 13-megapixel (1300W pixel) image data in the form of video or pictures.
Step S20, taking at least one frame of image data, and dividing the single frame of image data into areas, wherein each divided frame of image data comprises an image subject area for display and a plurality of image background areas positioned at the periphery of the image subject area;
as an example, in the acquired image data, 10 frames of image data are taken, each frame of image data is divided into areas, the areas are divided by squared squares, the area in the center is an image subject area for output display, and the rest eight areas are image background areas. The size of the image subject area is adjustable and can be set and defined in advance through a setting control menu, as shown in fig. 3.
Step S30, detecting each image background area of the single-frame image data to obtain an area brightness value, and carrying out average calculation on all the area brightness values to obtain a single-frame brightness value;
based on the division of the nine-square grid, except for the image subject area, the brightness values of the rest eight image background areas are detected, and the specific evaluation method is that each area firstly detects the brightness value of the frame, 100 x 100 is taken to be 10000 pixel points according to the horizontal and vertical value directions, 20% of the pixel points with high brightness values and 20% of the pixel points with low brightness values are removed, the brightness values of the rest 600 pixel points, namely the rest 60% of the brightness values are averaged, and the area brightness value n is equal to (n is equal to)1+n2+..n600) And/600, wherein n represents the brightness value of the pixel point, the obtained average value is the region brightness value, and the brightness value of each image background region is obtained respectively. And then, solving the root mean square value of the brightness values of the eight areas to obtain a single-frame brightness value of the background of the single-frame image data.
Step S40, carrying out average calculation on the single-frame brightness values of all the single-frame image data to obtain the background brightness value of the image data;
and processing all continuously acquired 10 frames of image data by the same steps, and then performing averaging processing to obtain the background brightness value of the current scene.
Step S50, comparing the background brightness value with a preset brightness reference value, and judging the environment type of the current scene;
comparing the obtained background brightness value with a preset brightness reference value in a scene brightness database, and judging the environment type of the current scene, wherein the environment types include night, cloudy day indoor, cloudy day outdoor, sunny day indoor and sunny day outdoor, and the corresponding relationship between each environment type and the preset brightness reference value is as follows: at night: 0.001-0.02 cd/m2(ii) a And (4) at night: 0.02-0.3 cd/m2(ii) a In a cloudy day: 5-50 cd/m2(ii) a Outside the cloudy day: 50-500 cd/m2(ii) a Indoor in a sunny day: 100-1000 cd/m2(ii) a Outside a sunny day: 1*108-3*109cd/m2
Step S60, processing the image in the image subject area according to the environment type, and then outputting and displaying the processed image;
Specifically, the central image subject area obtained from the division is output and displayed on the display module of the AR smart glasses for the user to view.
First, the image in the image subject area is processed by an algorithm to identify objects. The recognition process is as follows: the image in the image subject area is converted to grayscale to obtain a black-and-white image, the image is then divided into small areas, each small area is binarized and edge and corner detection is performed as low-level image detection, and a CNN network algorithm is used to judge what object the whole image shows. If the result is not one of the predefined pure-background scenes such as ground, grass or a wall, the image subject area is judged to contain an image subject. The image subject refers to the wanted part of the image: for example, in a photo of a person, the person is the image subject and the rest is the image background.
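A rough sketch of this recognition step using OpenCV for the low-level processing; the `classify_image` callable stands in for the trained CNN, and the set of pure-background labels is an assumed example rather than part of the patent.
```python
import cv2
import numpy as np

BACKGROUND_LABELS = {"ground", "grass", "wall"}   # assumed pure-background classes

def detect_subject(subject_region: np.ndarray, classify_image) -> bool:
    """Grayscale + binarization, edge and corner detection as low-level features,
    then a CNN classifier (passed in as `classify_image`) decides what the
    region shows; returns True if an image subject is judged to exist."""
    gray = cv2.cvtColor(subject_region, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(binary, 50, 150)
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=200, qualityLevel=0.01, minDistance=5)
    label = classify_image(subject_region, edges, corners)   # e.g. CNN inference
    return label not in BACKGROUND_LABELS
```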
If it is judged that the image in the image subject area contains no image subject, the image is considered to be a pure image background, for example an image shot facing a plain wall. In that case, after the environment type of the current scene is obtained, the result is stored, and the intelligent background is used in cooperation with the current APP application.
When an image subject exists in the image subject area, the image needs to be separated into the image subject and the image background, and one or more of size enlargement, color saturation adjustment, contrast adjustment and brightness adjustment are applied to the separated image subject. Matched with a suitable image background color, the display becomes very clear, and combined with the application functions of the AR smart device the best experience is achieved. Whether the image subject is displayed on the screen can also be determined by the calling application APP.
The brightness and color of the separated image background are adjusted according to the environment type of the current scene, and the processed image subject and the adjusted image background are composited into an image for output and display. In addition, one or more of picture data, text data and voice data from the application of the AR smart device can be overlaid as well to generate the composite image for display. The image background can also be replaced by an intelligent background, pre-stored in the application APP and corresponding to the environment type, which is then overlaid with the image subject and output for display.
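A minimal compositing sketch, assuming a binary subject mask is already available from the separation step; the per-environment gain/offset values and the fixed subject enhancement factors are illustrative placeholders, not values from the patent.
```python
import cv2
import numpy as np

# Assumed per-environment background adjustment: (contrast gain, brightness offset).
BACKGROUND_ADJUST = {
    "dark night":    (0.8, -20),
    "sunny outdoor": (1.2,  30),
    # ... remaining environment types would be filled in analogously
}

def compose_output(frame: np.ndarray, subject_mask: np.ndarray, env_type: str) -> np.ndarray:
    """Enhance the subject pixels, adjust the background pixels for the detected
    environment, and merge them into the image sent to the display."""
    alpha, beta = BACKGROUND_ADJUST.get(env_type, (1.0, 0))
    background = cv2.convertScaleAbs(frame, alpha=alpha, beta=beta)
    subject = cv2.convertScaleAbs(frame, alpha=1.3, beta=15)   # simple contrast/brightness boost
    mask = (subject_mask > 0)[..., None]
    return np.where(mask, subject, background)
```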
In this embodiment, after the environment type of the current scene is obtained, the environment type may further be output, as an input condition, to the display controller that controls the display screen. Based on the judged environment type, the display screen outputs a different display brightness, with the following correspondence:
type of environment Night Night of the moon In the cloudy sky Outside cloudy sky Indoor in sunny day Outdoors in sunny days
Display brightness 20% 40% 50% 60% 70% 80
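Read as a lookup, the correspondence above could be encoded as follows; the fallback value for an unknown environment type is an assumption.
```python
# Display brightness (fraction of maximum) per environment type, as listed above.
DISPLAY_BRIGHTNESS = {
    "dark night":     0.20,
    "moonlit night":  0.40,
    "cloudy indoor":  0.50,
    "cloudy outdoor": 0.60,
    "sunny indoor":   0.70,
    "sunny outdoor":  0.80,
}

def display_brightness(env_type: str) -> float:
    return DISPLAY_BRIGHTNESS.get(env_type, 0.50)   # assumed fallback
```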
Because the image background brightness differs between environments, the display brightness projected on the optical display system also differs, making the image subject stand out more clearly; and this is achieved on the existing hardware of conventional AR smart glasses, without adding an extra light-intensity sensor or an adjustable-tint lens. Hardware cost, weight and design complexity are saved, the intelligent ambient-light-intensity discrimination makes the AR smart glasses adaptable to more usage scenes, and the enhanced image subject area is displayed so that the image subject appears clearer and more distinct.
Although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the spirit and scope of the invention.

Claims (10)

1. An image processing method applied to an AR intelligent device is characterized by comprising the following steps:
acquiring acquired image data of a current scene;
taking image data of at least one frame, and carrying out region division on single-frame image data, wherein each frame of divided image data comprises an image subject region for display and a plurality of image background regions positioned on the periphery of the image subject region;
detecting each image background area of the single-frame image data to obtain an area brightness value, and carrying out averaging calculation on all the area brightness values to obtain a single-frame brightness value;
carrying out averaging calculation on single-frame brightness values of all the single-frame image data to obtain a background brightness value of the image data;
comparing the background brightness value with a preset brightness reference value, and judging the type of the environment where the current scene is located;
and processing the image in the image subject area according to the environment type and then outputting and displaying the processed image.
2. The image processing method applied to the AR smart device according to claim 1,
the step of detecting each image background area of the single frame image data to obtain the area brightness value comprises the following steps:
and taking M pixel points by N in the region, detecting the brightness values of all the pixel points, removing a certain number of high brightness values and a certain number of low brightness values, and averaging the rest brightness values to be used as the region brightness value of each image background region.
3. The image processing method applied to the AR smart device according to claim 1,
the single-frame image data is divided into areas according to a nine-square grid, the central area is an image subject area for output display, and the rest areas are image background areas.
4. The image processing method applied to the AR smart device according to claim 1,
the environment typeIncluding dark night, moon night, cloudy day indoor, cloudy day outdoor, sunny day indoor and sunny day outdoor, the corresponding relation of every environment type and preset luminance reference value has: at night: 0.001-0.02 cd/m2(ii) a And (4) at night: 0.02-0.3 cd/m2(ii) a In a cloudy day: 5-50 cd/m2(ii) a Outside the cloudy day: 50-500 cd/m2(ii) a Indoor in a sunny day: 100-1000 cd/m2(ii) a Outside a sunny day: 1*108-3*109cd/m2
5. The image processing method applied to the AR smart device according to claim 1,
the step of processing the image in the image subject area according to the environment type and then outputting and displaying the image comprises the following steps:
identifying the image of the image subject area, and judging whether an image subject exists or not;
if so, separating the image into an image subject and an image background, otherwise, taking the image as the image background;
and adjusting parameters of the image background according to the environment type, and then overlapping the image background with the image theme for output display, or replacing the image background with an intelligent background corresponding to the environment type according to the environment type, and then overlapping the image background with the image theme for output display.
6. The image processing method applied to the AR intelligent device according to claim 5, further comprising:
one or more processing modes of size enlargement adjustment, color saturation adjustment, contrast adjustment and brightness adjustment are carried out on the separated image theme;
and superposing the processed image theme with the adjusted image background or intelligent background to generate a synthesized image for output and display.
7. The image processing method applied to the AR intelligent device according to claim 5, further comprising:
and simultaneously superposing the adjusted image background or the intelligent background with one or more of picture data, text data and voice data in the application of the AR intelligent device to generate a synthetic image.
8. The image processing method applied to the AR intelligent device according to claim 5, wherein the "adjusting the parameter of the image background according to the environment type" means adjusting the brightness and the color of the image background according to the environment type of the current scene.
9. An image processing system applied to an AR intelligent device is characterized by comprising:
the image acquisition unit is used for acquiring the acquired image data of the current scene;
the image dividing unit is used for taking image data of at least one frame and carrying out region division on single-frame image data, wherein each frame of divided image data comprises an image subject region for display and a plurality of image background regions positioned on the periphery of the image subject region;
the brightness detection unit is used for detecting each image background area of the single-frame image data to obtain an area brightness value, and carrying out average calculation on all the area brightness values to obtain a single-frame brightness value;
the brightness calculation unit is used for carrying out averaging calculation on single-frame brightness values of all the single-frame image data to obtain a background brightness value of the image data;
the brightness comparison unit is used for comparing the background brightness value with a preset brightness reference value and judging the type of the environment where the current scene is located;
and the image processing unit is used for processing the image in the image subject area according to the environment type and then outputting and displaying the processed image.
10. The image processing system applied to the AR smart device according to claim 9, wherein the AR smart device is a pair of AR smart glasses, the AR smart glasses comprising an RGB camera and a display module, the RGB camera being used for acquiring image data of the current scene and the display module being used for displaying an image.
CN202010552388.1A 2020-06-17 2020-06-17 Image processing method and system applied to AR intelligent device Pending CN111757082A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010552388.1A CN111757082A (en) 2020-06-17 2020-06-17 Image processing method and system applied to AR intelligent device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010552388.1A CN111757082A (en) 2020-06-17 2020-06-17 Image processing method and system applied to AR intelligent device

Publications (1)

Publication Number Publication Date
CN111757082A true CN111757082A (en) 2020-10-09

Family

ID=72676265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010552388.1A Pending CN111757082A (en) 2020-06-17 2020-06-17 Image processing method and system applied to AR intelligent device

Country Status (1)

Country Link
CN (1) CN111757082A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105516613A (en) * 2015-12-07 2016-04-20 凌云光技术集团有限责任公司 Intelligent exposure method and system based on face recognition
CN106713778A (en) * 2016-12-28 2017-05-24 上海兴芯微电子科技有限公司 Exposure control method and device
CN108024055A (en) * 2017-11-03 2018-05-11 广东欧珀移动通信有限公司 Method, apparatus, mobile terminal and the storage medium of white balance processing

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112213860A (en) * 2020-11-17 2021-01-12 闪耀现实(无锡)科技有限公司 Augmented reality device, wearable augmented reality equipment and method for controlling augmented reality device
CN112213860B (en) * 2020-11-17 2023-04-18 闪耀现实(无锡)科技有限公司 Augmented reality device, wearable augmented reality equipment and method for controlling augmented reality device
CN112884909A (en) * 2021-02-23 2021-06-01 浙江商汤科技开发有限公司 AR special effect display method and device, computer equipment and storage medium
CN113450289A (en) * 2021-08-31 2021-09-28 中运科技股份有限公司 Method for automatically enhancing low illumination of face image in passenger traffic scene
CN113450289B (en) * 2021-08-31 2021-12-10 中运科技股份有限公司 Method for automatically enhancing low illumination of face image in passenger traffic scene

Similar Documents

Publication Publication Date Title
EP0932114B1 (en) A method of and apparatus for detecting a face-like region
CN111757082A (en) Image processing method and system applied to AR intelligent device
CN110378859B (en) Novel high dynamic range image generation method
US20170316297A1 (en) Translucent mark, method for synthesis and detection of translucent mark, transparent mark, and method for synthesis and detection of transparent mark
US7830418B2 (en) Perceptually-derived red-eye correction
US20160110846A1 (en) Automatic display image enhancement based on user's visual perception model
CN106981054B (en) Image processing method and electronic equipment
CN111586273B (en) Electronic device and image acquisition method
CN106550227B (en) A kind of image saturation method of adjustment and device
CN107968919A (en) Method and apparatus for inverse tone mapping (ITM)
CN107682611B (en) Focusing method and device, computer readable storage medium and electronic equipment
JPH0844874A (en) Image change detector
WO2013114803A1 (en) Image processing device, image processing method therefor, computer program, and image processing system
CN105513566A (en) Image adjusting method of executing optimal adjustment according to different environments and displayer
KR100350789B1 (en) Method of raw color adjustment and atmosphere color auto extract in a image reference system
CN106878606B (en) Image generation method based on electronic equipment and electronic equipment
CN109300186B (en) Image processing method and device, storage medium and electronic equipment
EP4090006A2 (en) Image signal processing based on virtual superimposition
WO2022246945A1 (en) Information display method and apparatus, ar device and storage medium
CN115602099A (en) Display screen display adjusting method and system, computer equipment and storage medium
CN111192227B (en) Fusion processing method for overlapped pictures
CN111915528A (en) Image brightening method and device, mobile terminal and storage medium
WO2013114802A1 (en) Image processing device, image processing method therefor, computer program, and image processing system
CN110673720A (en) Eye protection display method and learning machine with eye protection mode
US11688046B2 (en) Selective image signal processing

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20201009