CN112839181B - Method and equipment for generating high dynamic range image - Google Patents


Info

Publication number
CN112839181B
CN112839181B
Authority
CN
China
Prior art keywords
image information
vector
class
target
category
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011621503.2A
Other languages
Chinese (zh)
Other versions
CN112839181A (en)
Inventor
Chen Wentao (陈文涛)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Zhangmen Science and Technology Co Ltd
Original Assignee
Shanghai Zhangmen Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Zhangmen Science and Technology Co Ltd
Priority to CN202011621503.2A
Publication of CN112839181A
Application granted
Publication of CN112839181B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/61: Control of cameras or camera modules based on recognised objects
    • H04N 23/70: Circuitry for compensating brightness variation in the scene
    • H04N 23/741: Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • H04N 23/76: Circuitry for compensating brightness variation in the scene by influencing the image signals
    • H04N 25/00: Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/50: Control of the SSIS exposure
    • H04N 25/57: Control of the dynamic range
    • H04N 25/58: Control of the dynamic range involving two or more exposures

Abstract

An object of the present application is to provide a method and apparatus for generating a high dynamic range image. The method comprises: acquiring image information to be processed, wherein the image information comprises one or more first objects belonging to at least one first object category; generating a first target vector of the image information according to the at least one first object category; inputting the first target vector into a vector regression model to output a second target vector of the image information, wherein the second target vector comprises a target exposure of the first object region corresponding to each of the at least one first object category in the image information; and synthesizing a high dynamic range image of the image information according to the target exposure of the first object region corresponding to each first object category. Because the high dynamic range image is generated from the image characteristics of the image information itself, the finally generated high dynamic range image is more realistic and the image effect is improved.

Description

Method and equipment for generating high dynamic range image
Technical Field
The present application relates to the field of image processing, and more particularly, to a technique for generating a high dynamic range image.
Background
High dynamic range imaging (HDRI) is a technique for achieving a larger dynamic range of exposure (i.e., a larger difference between light and dark) than conventional digital imaging techniques. It can prevent bright scenery from being overexposed while also preventing dark scenery from being underexposed. For example, a person can be photographed in a backlit environment such that both the person and the environment are captured clearly, and the photo as a whole is neither too dark nor too bright.
Disclosure of Invention
It is an object of the present application to provide a method and apparatus for generating a high dynamic range image.
According to an aspect of the present application, there is provided a method for generating a high dynamic range image, the method comprising:
acquiring image information to be processed, wherein the image information comprises one or more first objects, and the one or more first objects belong to at least one first object category;
generating a first target vector of the image information according to the at least one first object category;
inputting the first target vector into a vector regression model to output a second target vector of the image information, wherein the second target vector comprises target exposure of a first object region corresponding to each of the at least one first object category in the image information;
synthesizing a high dynamic range image of the image information according to the target exposure of the first object region corresponding to each first object category in the image information.
According to an aspect of the present application, there is provided an apparatus for generating a high dynamic range image, the apparatus comprising:
a first module for acquiring image information to be processed, wherein the image information comprises one or more first objects, and the one or more first objects belong to at least one first object category;
a second module for generating a first target vector of the image information according to the at least one first object class;
a third module, configured to input the first target vector into a vector regression model to output a second target vector of the image information, where the second target vector includes a target exposure of a first object region corresponding to each of the at least one first object class in the image information;
and a fourth module for synthesizing a high dynamic range image of the image information according to the target exposure of the first object region corresponding to each first object category in the image information.
According to an aspect of the present application, there is provided an apparatus for generating a high dynamic range image, wherein the apparatus comprises:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the operations of any of the methods described above.
According to one aspect of the application, there is provided a computer-readable medium storing instructions that, when executed, cause a system to perform the operations of any of the methods described above.
According to an aspect of the application, a computer program product is provided, comprising a computer program which, when executed by a processor, carries out the steps of any of the methods as described above.
Compared with the prior art, the present application generates a first target vector of the image information according to the at least one first object category to which the one or more first objects appearing in the image information belong, and obtains a second target vector of the image information by inputting the first target vector into a vector regression model. The target exposure of the first object region corresponding to each first object category in the image information is read from the output second target vector, and a high dynamic range image of the image information is then synthesized according to the target exposure of each first object region. Because the target exposure of each first object region is derived from the image characteristics of the image information (for example, which first objects appear and which first object categories they belong to), the synthesized high dynamic range image is more realistic and produces a better effect.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 illustrates a flow diagram of a method for generating a high dynamic range image according to one embodiment of the present application;
FIG. 2 illustrates a block diagram of an apparatus for generating a high dynamic range image according to one embodiment of the present application;
FIG. 3 illustrates an exemplary system that can be used to implement the various embodiments described in this application.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (e.g., Central Processing Units (CPUs)), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory. Memory is an example of a computer-readable medium.

Computer-readable media, including both removable and non-removable, volatile and non-volatile media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, Phase-Change Memory (PCM), Programmable Random Access Memory (PRAM), Static Random-Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The device referred to in the present application includes, but is not limited to, a terminal, a network device, or a device formed by integrating a terminal and a network device through a network. The terminal includes, but is not limited to, any mobile electronic product capable of human-computer interaction with a user (e.g., through a touch panel), such as a smartphone or a tablet computer, and the mobile electronic product may employ any operating system, such as the Android or iOS operating system. The network device includes an electronic device capable of automatically performing numerical calculation and information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Programmable Logic Device (PLD), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like. The network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud of multiple servers; here, the cloud is composed of a large number of computers or network servers based on cloud computing, a kind of distributed computing in which one virtual supercomputer consists of a collection of loosely coupled computers. The network includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN network, a wireless ad hoc network, and the like. Preferably, the device may also be a program running on the terminal, the network device, the touch terminal, or a device formed by integrating the terminal and the network device, or the network device and the touch terminal, through a network.
Of course, those skilled in the art will appreciate that the foregoing is by way of example only, and that other existing or future devices, which may be suitable for use in the present application, are also encompassed within the scope of the present application and are hereby incorporated by reference.
In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
Here, the execution subject of the method for generating a high dynamic range image described in the present application includes, but is not limited to, a network device or a user device comprising a camera. For convenience of explanation, the method of the present application is described below from the perspective of the user device, which is hereinafter collectively referred to as "the device" or "the device for generating a high dynamic range image".
In some embodiments, the user device includes, but is not limited to, a computing device such as a mobile phone, a computer, or a tablet. For example, when the execution subject is the user device, the user device acquires the image information to be processed through its camera and synthesizes a high dynamic range image of the image information based on the method described in the present application. For another example, when the execution subject is the network device, the user device may send the acquired image information to be processed to the network device, and the network device synthesizes a high dynamic range image of the image information based on the method described in the present application.
Fig. 1 shows a flowchart of a method for generating a high dynamic range image according to an aspect of the present application, the method comprising step S11, step S12, step S13 and step S14. In step S11, the device acquires image information to be processed, wherein the image information includes one or more first objects belonging to at least one first object category; in step S12, the device generates a first target vector of the image information according to the at least one first object class; in step S13, the apparatus inputs the first target vector into a vector regression model to output a second target vector of the image information, wherein the second target vector includes a target exposure of a first object region corresponding to each of the at least one first object class in the image information; in step S14, the apparatus synthesizes a high dynamic range image of the image information according to the target exposure of the first object region corresponding in the image information for each first object category.
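The four steps above can be chained into a single pipeline. The sketch below is a minimal illustration of that flow, not an implementation of the patent: the category set, the helper names, and the fixed exposures returned by the placeholder regression model are all assumptions made for this example.

```python
# Hypothetical ordered category set, following the running example in the text.
CATEGORY_SET = ["person", "animal", "food", "office supplies",
                "school supplies", "vehicle", "other"]

def build_first_vector(categories_present):
    """Step S12: one first component per category in the ordered set."""
    return [1 if c in categories_present else 0 for c in CATEGORY_SET]

def run_regression_model(first_vector):
    """Step S13: placeholder for the trained vector regression model;
    it simply returns a fixed (assumed) exposure for each present category."""
    demo_exposures = {1: 100, 3: 80, 6: 90}  # assumed demo values
    return [demo_exposures.get(i, 0) if v else 0
            for i, v in enumerate(first_vector)]

def generate_hdr(categories_present):
    """Steps S11-S14 chained: returns the per-category target exposures
    that step S14 would hand to the HDR synthesis stage."""
    first = build_first_vector(categories_present)
    second = run_regression_model(first)
    return {c: e for c, e in zip(CATEGORY_SET, second) if e > 0}
```

With the example categories "animal, office supplies, other", this yields the first target vector [0,1,0,1,0,0,1] and per-category exposures matching the second target vector [0,100,0,80,0,0,90] used throughout the description.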
Specifically, in step S11, the apparatus acquires image information to be processed, wherein the image information includes one or more first objects belonging to at least one first object category. In some embodiments, the image information includes, but is not limited to, image information captured by a viewfinder of the user device. For example, when a user device (e.g., a mobile phone) frames a scene through its viewfinder, the device may acquire the image information for analysis and processing. In some embodiments, the first object includes, but is not limited to, an item appearing in the image information (e.g., a cup, a book, a computer, the sky, etc.). In some embodiments, the first objects appearing in the image information are classified to determine the at least one first object category to which the one or more first objects correspond. For example, image information A includes the first objects: a kitten, a book, a computer, and the sky. The kitten belongs to the animal category, the book and the computer belong to the office supplies category, and the sky belongs to the other category, so the one or more first objects in image information A belong to the three first object categories "animal, office supplies, and other".
In step S12, the device generates a first target vector of the image information according to the at least one first object category. In some embodiments, the first target vector comprises a plurality of first components; for example, the first target vector is [0,1,0,1,0,0,1], where values such as "0" and "1" in the first target vector are its first components. For example, image information A includes the first objects: a kitten, a desk, a computer, and the sky. These may be categorized to determine that the first objects in image information A belong to at least one first object category (e.g., animal, office supplies, and other). The apparatus generates the corresponding first target vector (e.g., [0,1,0,1,0,0,1]) based on the three first object categories "animal, office supplies, and other", thereby vectorizing image information A.
In step S13, the apparatus inputs the first target vector into a vector regression model to output a second target vector of the image information, wherein the second target vector includes the target exposure of the first object region corresponding to each of the at least one first object category in the image information. In some embodiments, the vector regression model is used to output a corresponding second target vector based on the input first target vector. For example, the device vectorizes the image information based on the at least one first object category to generate the first target vector, and inputs the first target vector into the vector regression model to obtain the second target vector corresponding to the image information. In some embodiments, the second target vector comprises a plurality of second components; for example, the second target vector is [0,100,0,80,0,0,90], where values such as "80", "100", "90", and "0" are its second components. For example, the at least one first object category to which the first objects in image information A belong includes animal, office supplies, and other, and the obtained second target vector is [0,100,0,80,0,0,90]. The second component "100" serves as the target exposure of the first object region corresponding to the category "animal", the second component "80" as the target exposure of the first object region corresponding to "office supplies", and the second component "90" as the target exposure of the first object region corresponding to "other".
In some embodiments, the device determines, based on the arrangement order of the second components in the second target vector, the first object category corresponding to each second component, so as to determine the target exposure of the first object region corresponding to each first object category in the image information. In some embodiments, the first object region corresponding to a first object category in the image information includes the regions where the one or more objects belonging to that category are located. For example, in image information A, the office supplies category includes the desk and the computer, so the first object region corresponding to "office supplies" in image information A includes the regions corresponding to the desk and the computer. In some embodiments, the apparatus divides the image information into one or more regions based on a YOLO algorithm, detects the first object corresponding to each region to classify it, and determines the first object region corresponding to each first object category (for example, the regions in which first objects belonging to the same first object category are located together constitute the first object region corresponding to that category). In some embodiments, the image information may also be segmented into one or more regions based on image segmentation techniques (e.g., algorithms such as ResNet, VGGNet, Faster R-CNN, etc.), after which the first object corresponding to each region is identified (e.g., based on image recognition techniques) to classify the first objects and determine the first object region corresponding to each first object category.
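Grouping detected objects into per-category regions, as described above, can be sketched as follows. The function and its inputs are hypothetical: `detections` stands in for the output of a detector such as YOLO, and `category_of` for the label-to-category mapping the device maintains.

```python
def group_regions_by_category(detections, category_of):
    """Group detected object boxes so each first object category maps to
    the regions of all its member objects.
    detections: list of (label, box) pairs, e.g. from a YOLO-style detector.
    category_of: dict mapping an object label to its object category;
    unrecognised labels fall into the "other" category."""
    regions = {}
    for label, box in detections:
        cat = category_of.get(label, "other")
        regions.setdefault(cat, []).append(box)
    return regions
```

For image information A, the desk and computer boxes would land together under "office supplies", matching the example in the text.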
Of course, those skilled in the art will appreciate that the above specific operations for identifying and segmenting the image information are merely examples, and other specific operations, existing now or arising later, that may be applicable to the present application are also within its scope and are incorporated herein by reference. In some embodiments, the exposure level of an object region is calculated based on the pixel information within that region. For example, after the target region is converted to grayscale, the average of all pixel values in the region is calculated to obtain its exposure level. In some embodiments, the exposure effect is best when the exposure level of the first object region equals the target exposure of the corresponding object.
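The grey-and-average computation of a region's exposure level can be sketched as below. The patent only says to grey the region and average the pixel values, so the ITU-R BT.601 luma weights used here are an assumption, and `pixels` is a hypothetical flat list of the region's (R, G, B) values.

```python
def region_exposure(pixels):
    """Exposure level of a region: convert each (R, G, B) pixel to grey
    using the common ITU-R BT.601 luma weights (an assumption; the patent
    does not fix a formula), then average over the region."""
    grays = [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in pixels]
    return sum(grays) / len(grays)
```

A region of pure white and pure black pixels in equal numbers averages to 127.5, i.e. mid-grey.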
In step S14, the apparatus synthesizes a high dynamic range image of the image information according to the target exposure of the first object region corresponding in the image information for each first object category. For example, after the target exposure of each first object region is obtained, the high dynamic range image of the image information is synthesized based on the target exposure, and the image characteristics of the image information are fully considered, so that the finally obtained high dynamic range image is more real and has better effect.
In some embodiments, step S11 includes: the device acquires image information to be processed; determines one or more first objects appearing in the image information; and determines, according to the first object category to which each of the one or more first objects belongs, the at least one first object category to which the one or more first objects belong. In some embodiments, the user device obtains the image information in response to a framing operation by the user. In some embodiments, the device identifies the first objects appearing in the image information based on image recognition techniques. In some embodiments, the apparatus may also detect the first objects appearing in the image information based on a YOLO algorithm. In some embodiments, the device determines the first object category to which each first object belongs in order to determine the at least one first object category. For example, image information A includes the first objects: a kitten, a desk, a computer, and the sky. The kitten belongs to the animal category, the desk and the computer belong to the office supplies category, and the sky belongs to the other category, so the first objects in the image information are determined to belong to the three first object categories "animal, office supplies, and other". In some embodiments, a mapping relationship between each of a plurality of object categories and its corresponding objects is established in the device, so that the category to which a determined object belongs can be looked up. In some embodiments, a first object that fails to be identified is classified into the other category.
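The label-to-category lookup with an "other" fallback described above might look like this; the `LABEL_TO_CATEGORY` table is an invented example for illustration, not a mapping defined by the patent.

```python
# Hypothetical mapping from recognised object labels to object categories;
# objects that fail identification fall into the "other" category.
LABEL_TO_CATEGORY = {
    "cat": "animal", "dog": "animal",
    "desk": "office supplies", "computer": "office supplies",
    "book": "office supplies",
}

def categories_of(labels):
    """Return the set of first object categories for the detected labels,
    sending unknown labels (e.g. "sky") to the "other" category."""
    return {LABEL_TO_CATEGORY.get(label, "other") for label in labels}
```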
In some embodiments, the step S12 includes: the device determines an assignment of each first component in a first initial vector according to the at least one first object class and a class set to generate a first target vector of the image information, wherein the first initial vector corresponds to the class set. In some embodiments, the first target vector comprises a plurality of first components, and the device generates the first target vector of the image information by assigning values to the first initial vector. In some embodiments, the device determines an assignment of a corresponding first component of the first initial vector based on the determined first object class and the set of classes in the image information to generate a first target vector of the image information. For example, the reassigned first initial vector is used as a first target vector of the image information. In some embodiments, the first initial vector corresponds to the set of categories such that the first initial vector is assigned a value based on the set of categories.
In some embodiments, the category set includes a plurality of sequentially arranged second object categories, the first initial vector includes a plurality of first components, the number of second object categories equals the number of first components, each second object category has its corresponding first component in the first initial vector based on the arrangement order of the second object categories, and the initial assignment of each first component is zero. Step S12 then includes: if a second object category identical to a first object category exists in the category set, re-assigning the first component corresponding to that second object category in the first initial vector according to a target assignment, to generate the first target vector of the image information. Herein, the terms "first", "second", "third", etc. are only used to distinguish information in different objects (e.g., image information, pictures, category sets) and do not denote any order. In some embodiments, the category set includes a plurality of second object categories arranged in order; for example, category set B includes, in order, the second object categories: person, animal, food, office supplies, school supplies, vehicle, and other (objects that fail identification or determination may be marked as "other" and classified into the other category). Of course, those skilled in the art will appreciate that the above category set is merely exemplary, and other category sets, existing now or arising later, that may be applicable to the present application are also within its scope and are incorporated herein by reference.
Category set B corresponds to a first initial vector B; for example, the first initial vector B is [0,0,0,0,0,0,0], where the number of second object categories in category set B equals the number of first components in the first initial vector B, namely seven. Based on the arrangement order of the seven second object categories, each second object category has its corresponding first component in the first initial vector: the person category corresponds to the first of the first components in the first initial vector B, the animal category to the second, the food category to the third, and so on. In some embodiments, the target assignment includes, but is not limited to, a fixed value such as 1. For example, if a second object category identical to a first object category exists in the category set, the first component corresponding to that second object category in the first initial vector is re-assigned to 1. For example, category set B includes, in order, the second object categories: person, animal, food, office supplies, school supplies, vehicle, and other, and the first initial vector B is [0,0,0,0,0,0,0]. The first object categories appearing in the image information include animal, office supplies, and other; since these exist in the category set and are identical to the first object categories, the first components corresponding to animal, office supplies, and other in the first initial vector B are re-assigned to 1, generating the first target vector [0,1,0,1,0,0,1] of the image information.
In some embodiments, step S12 includes: the device sequentially detects, according to the arrangement order of the plurality of second object categories in the category set, whether a first object category identical to each second object category exists among the one or more first object categories, and if so, re-assigns the first component corresponding to that second object category in the first initial vector according to the target assignment, to generate the first target vector of the image information. For example, category set B includes, in order, the second object categories: person, animal, food, office supplies, school supplies, vehicle, and other; the first initial vector B is [0,0,0,0,0,0,0]; and the first object categories appearing in the image information include animal, office supplies, and other.
Based on the arrangement order of the plurality of second object categories, the device first detects whether "person" exists among the first object categories. Since it does not, the first component corresponding to person need not be re-assigned and keeps its initial assignment (e.g., 0). The device then detects whether "animal" exists among the first object categories; since it does, the first component corresponding to animal is re-assigned to the target assignment (e.g., 1). The device then detects whether "food" exists among the first object categories; since it does not, the first component corresponding to food keeps its initial assignment (e.g., 0), and so on. After the second object categories have been checked in sequence, the first target vector of the image information is generated.
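The sequential detection and re-assignment procedure above can be sketched directly. The category set, the target assignment of 1, and the function name are taken from the running example rather than prescribed by the patent.

```python
def first_target_vector(class_set, first_object_classes, target_value=1):
    """Walk the ordered category set; whenever a first object category
    matches the current second object category, re-assign that component
    from its initial value 0 to the target assignment (assumed to be 1)."""
    vector = [0] * len(class_set)          # the first initial vector
    for i, second_class in enumerate(class_set):
        if second_class in first_object_classes:
            vector[i] = target_value       # re-assignment step
    return vector
```

With category set B and the categories "animal, office supplies, other", this produces the first target vector [0,1,0,1,0,0,1] from the example.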
In some embodiments, the second target vector includes a number of second components equal to the number of second object categories, and each second object category has its corresponding second component in the second target vector based on the arrangement order of the second object categories. The method further includes step S15 (not shown): in step S15, for each of the at least one first object category in the image information, the device takes the value of the second component corresponding to the second object category identical to that first object category in the second target vector as the target exposure of the first object region corresponding to that first object category in the image information. This embodiment describes how to obtain the target exposure of the first object region corresponding to each first object category in the image information from the obtained second target vector. In some embodiments, the second target vector and the category set are also in correspondence, so that the first object category corresponding to each second component can be determined from the arrangement order of the second components in the second target vector, and that second component can then be used as the target exposure of the first object region corresponding to that first object category.
For example, category set B includes, in order, the second object categories: person, animal, food, office supplies, school supplies, vehicle, and other. The second target vector output by the vector regression model is [0, 100, 0, 80, 0, 0, 90]. Based on the arrangement order of the plurality of second object categories, the second component corresponding to the person is 0, indicating that the first object category of person does not exist in the image information; the second component corresponding to the animal is 100, which is taken as the target exposure level of the first object region corresponding to the animal in the image information; the second component corresponding to the food is 0, indicating that the first object category of food does not exist in the image information; and so on, so that the target exposure level of the first object region corresponding to each first object category in the image information is determined according to the obtained second target vector.
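Reading the target exposure levels out of the second target vector amounts to pairing components with categories by position. The helper below is an illustrative sketch; the category set and vector values are the example values from this paragraph, not mandated by the application.

```python
def target_exposures(first_target_vector, second_target_vector, class_set):
    """Pair each second component with its second object category by the
    arrangement order; keep only the categories actually present in the
    image information (i.e., whose first component was reassigned to 1)."""
    return {cls: exposure
            for cls, present, exposure in zip(class_set,
                                              first_target_vector,
                                              second_target_vector)
            if present}
```

For the example vectors above this returns {"animal": 100, "office supplies": 80, "other": 90}, i.e. one target exposure level per first object region present in the image information.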
In some embodiments, the method further comprises a step S16 (not shown). In step S16, the device constructs the vector regression model from the first vectors and the second vectors of a plurality of pictures. In some embodiments, the vector regression model is trained using an NFM (Neural Factorization Machine) network. The vector regression model is obtained by training on the first vectors and second vectors of a large number of pictures. Thus, by inputting a first target vector into the vector regression model, a corresponding second target vector can be output. In some embodiments, the plurality of pictures used for training the vector regression model are well-exposed pictures, so that a better result is obtained when the high dynamic range image is synthesized according to the target exposure levels included in the output second target vector.
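The application trains an NFM network for this regression; as a minimal stand-in to show the input/output shape of the model (first vectors in, second vectors out), the sketch below fits a plain least-squares linear map instead. The training pairs are invented for illustration and the linear model is an assumption, not the NFM architecture itself.

```python
import numpy as np

def fit_vector_regression(first_vectors, second_vectors):
    """Fit a linear map W such that first_vector @ W approximates the
    corresponding second vector, via ordinary least squares."""
    X = np.asarray(first_vectors, dtype=float)
    Y = np.asarray(second_vectors, dtype=float)
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def predict_second_target_vector(W, first_target_vector):
    """Output a second target vector for an input first target vector."""
    return np.asarray(first_target_vector, dtype=float) @ W
```

In practice the NFM network would replace the linear map, but the calling convention is the same: vectorized pictures with known per-region exposure levels in, predicted target exposure vectors out.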
In some embodiments, the method further includes a step S17 (not shown) and a step S18. In step S17, for each of the plurality of pictures, the device generates a first vector of the picture according to at least one third object category to which one or more third objects appearing in the picture belong; in step S18, the device generates a second vector of the picture according to the exposure level of the third object region corresponding to each of the at least one third object category in the picture. In some embodiments, the third objects (e.g., the sky, a table, a kitten, etc.) appearing in each collected picture are determined, for example, by image recognition techniques, or by a YOLO algorithm that detects the third objects in each picture. A first vector is then generated for each picture based on the category set. For example, picture C includes the third objects: a kitten, a desk, a computer, and the sky, which can be classified to obtain at least one third object category (for example, animal, office supplies, and other) to which the third objects in picture C belong. A first vector (e.g., [0, 1, 0, 1, 0, 0, 1]) corresponding to picture C is generated based on the three third object categories "animal, office supplies, and other" to vectorize picture C. Further, a second vector of the picture is generated according to the exposure level of the third object region corresponding to each third object category in the picture. For example, a picture is divided into one or more regions based on a YOLO algorithm, the third object corresponding to each region is detected, the third objects are classified, and the third object region corresponding to each third object category is determined (for example, the region where the third objects belonging to the same third object category are located is the third object region corresponding to that third object category).
In some embodiments, the picture may also be segmented into one or more regions based on an image segmentation technique (e.g., an image segmentation algorithm such as ResNet, VGGNet, Fast R-CNN, etc.), and then the third object corresponding to each region is identified (e.g., based on an image recognition technique), so as to classify the third objects and determine the third object region corresponding to each third object category. Of course, those skilled in the art will appreciate that the above specific operations for identifying and segmenting the pictures are merely examples, and other existing or future specific operations, as may be applicable to the present application, are also within the scope of the present application and are incorporated herein by reference. The exposure level of each third object region is then calculated, and a second vector of the picture is generated based on the exposure level of each third object region. A first vector and a second vector are thereby obtained for each picture.
In some embodiments, the step S17 includes: for each of the plurality of pictures, determining the assignment of each first component in a first initial vector according to the at least one third object category to which the one or more third objects appearing in the picture belong and a category set, to generate the first vector of the picture, wherein the first initial vector corresponds to the category set. In some embodiments, based on the same category set as in actual application (e.g., the category set used in determining the first target vector described above), the assignment of each first component in the first initial vector corresponding to the category set is determined to generate the first vector of each picture. For example, the reassigned first initial vector is used as the first vector of the picture. In some embodiments, the first initial vector corresponds to the category set, so that the first initial vector is assigned values based on the category set.
In some embodiments, the category set includes a plurality of second object categories arranged in sequence, the first initial vector includes a plurality of first components, and the number of the plurality of second object categories is equal to the number of the plurality of first components, so that each second object category has its corresponding first component in the first initial vector, and the initial assignment of each first component is zero; the step S17 includes: if a second object category identical to the third object category exists in the category set, the device reassigns the first component corresponding to that second object category in the first initial vector according to a target assignment to generate the first vector of the picture. In some embodiments, the category set includes a plurality of second object categories arranged in order; for example, category set B includes, in order, the second object categories: person, animal, food, office supplies, school supplies, vehicle, and other. Of course, those skilled in the art will appreciate that the above-described category sets are merely examples, and other existing or future category sets, as may be applicable to the present application, are within the scope of the present application and are incorporated herein by reference. The category set B corresponds to a first initial vector B, for example, the first initial vector B is [0, 0, 0, 0, 0, 0, 0], where the number of second object categories in the category set B is equal to the number of first components in the first initial vector B, and the number of second object categories is 7.
Based on the arrangement order of the 7 second object categories, each second object category has its corresponding first component in the first initial vector B; e.g., the person corresponds to the first first component in the first initial vector B, the animal corresponds to the second first component, the food corresponds to the third first component, and so on. In some embodiments, the target assignment includes, but is not limited to, a fixed value such as 1. For example, if a second object category identical to a third object category in the picture exists in the category set, the first component corresponding to that second object category in the first initial vector is reassigned to the fixed value 1. For example, category set B includes, in order, the second object categories: person, animal, food, office supplies, school supplies, vehicle, and other, and the first initial vector B is [0, 0, 0, 0, 0, 0, 0]. The third objects appearing in a picture include an orange, a banana, an office desk, and others (for example, unidentified articles can be marked with the "other" label). Then the categories of food, office supplies, and other exist in the category set and are the same as the third object categories corresponding to those third objects, and the first components corresponding to food, office supplies, and other in the first initial vector B are reassigned to 1, so that the first vector of the picture is generated as [0, 0, 1, 1, 0, 0, 1].
In some embodiments, the specific process of generating the first vector of the picture comprises: the device sequentially detects, according to the arrangement order of the plurality of second object categories in the category set, whether a third object category identical to the second object category exists among the one or more third object categories of the picture, and if so, reassigns the first component corresponding to that second object category in the first initial vector according to the target assignment to generate the first vector of the picture.
In some embodiments, the step S18 includes a step S181 (not shown), a step S182, and a step S183. In step S181, the device determines the third object region corresponding to each of the at least one third object category in the picture; in step S182, the device calculates the exposure level of each third object region to obtain the exposure level of the third object region corresponding to each of the at least one third object category in the picture; in step S183, the device determines, according to the exposure level of the third object region corresponding to each of the at least one third object category in the picture and the category set, the assignment of each second component in a second initial vector to generate the second vector of the picture, where the second initial vector corresponds to the category set. In some embodiments, for each picture, it is necessary to determine the third object region of each third object category in the picture, then calculate the exposure level of each third object region, and then generate the second vector of the picture according to the exposure level of each third object region. In some embodiments, when generating the second vector, it is also necessary to determine, based on the category set, the second component corresponding to the exposure level of each third object region in the second initial vector, so as to assign a value to that second component according to the exposure level of the third object region to generate the second vector of the picture.
In some embodiments, the step S181 includes: the device determines one or more third objects appearing in the picture and the third object sub-region corresponding to each third object, and takes the third object sub-regions corresponding to the third objects belonging to the same third object category as the third object region corresponding to that third object category. In some embodiments, for each picture, the third objects appearing in the picture and the third object sub-region corresponding to each third object (for example, the region where the third object is located) are detected based on the YOLO algorithm. In some embodiments, the one or more third objects in the picture and the third object sub-region corresponding to each third object may also be determined based on image segmentation techniques (e.g., image segmentation algorithms such as ResNet, VGGNet, Fast R-CNN, etc.) and image recognition techniques. In some embodiments, the third object sub-regions corresponding to the third objects belonging to the same third object category are taken as the third object region corresponding to that third object category; for example, if a kitten and a puppy both belong to the animal category, the regions where the kitten and the puppy are located are determined as the third object region corresponding to the third object category "animal". In other words, the third object region corresponding to the third object category "animal" includes the sum of the third object sub-regions where the kitten and the puppy are located.
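Merging sub-regions into one region per category can be sketched with boolean masks. The masks and labels here are illustrative stand-ins for detector output (e.g., from YOLO); the function name is hypothetical.

```python
import numpy as np

def merge_sub_regions(sub_region_masks, object_classes):
    """Union the boolean sub-region masks of third objects that belong to
    the same third object class into one third object region per class,
    e.g. the kitten mask and the puppy mask into one "animal" region."""
    regions = {}
    for mask, cls in zip(sub_region_masks, object_classes):
        if cls in regions:
            regions[cls] = regions[cls] | mask  # elementwise union
        else:
            regions[cls] = mask.copy()
    return regions
```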
In some embodiments, the step S182 includes: the device calculates the exposure level of each third object region according to the pixel information of the third object region to obtain the exposure level of the third object region corresponding to each of the at least one third object category in the picture. In some embodiments, after dividing the third object regions, the device calculates the exposure level of each third object region based on all the pixel information in that third object region. For example, after the third object region is grayed, the average value of all pixel information in the third object region is calculated, and the average value is used as the exposure level of the third object region.
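A minimal sketch of this exposure computation: gray the region, then average its pixels. The application does not fix a graying formula, so the common ITU-R BT.601 luma weights are assumed here for illustration.

```python
import numpy as np

def region_exposure(rgb_pixels):
    """Exposure level of a region: gray the RGB pixels (assumed BT.601
    weights), then take the average of all pixel values in the region."""
    rgb = np.asarray(rgb_pixels, dtype=float)
    gray = rgb @ np.array([0.299, 0.587, 0.114])  # per-pixel grayscale
    return float(gray.mean())
```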
In some embodiments, the category set includes a plurality of second object categories arranged in sequence, the second initial vector includes a plurality of second components, the initial assignment of each second component is zero, and the number of the plurality of second object categories is equal to the number of the plurality of second components, so that each second object category has its corresponding second component in the second initial vector. The step S183 includes: if a second object category identical to the third object category exists in the category set, reassigning the second component corresponding to that second object category in the second initial vector according to the exposure level of the third object region corresponding to the third object category to generate the second vector of the picture. In some embodiments, the category set further corresponds to a second initial vector, the initial assignment of each second component in the second initial vector is zero, each second object category has its corresponding second component in the second initial vector based on the arrangement order of the second object categories in the category set, and the assignment of each second component in the second initial vector is determined according to the third object categories and the arrangement order of the second object categories. For example, category set B includes, in order, the second object categories: person, animal, food, office supplies, school supplies, vehicle, and other (e.g., objects that fail to be identified or determined in the image information or picture can be marked as "other" and classified into the other category). The category set B corresponds to a second initial vector B, for example, the second initial vector B is [0, 0, 0, 0, 0, 0, 0], where the number of second object categories in the category set B is equal to the number of second components in the second initial vector B, and the number of second object categories in the category set B is 7.
Based on the arrangement order of the 7 second object categories, each second object category has its corresponding second component in the second initial vector; e.g., the person corresponds to the first second component in the second initial vector B, the animal corresponds to the second second component, the food corresponds to the third second component, and so on. If a second object category identical to a third object category exists in the category set, the corresponding second component of that second object category in the second initial vector is reassigned; the specific reassigned value is the exposure level of the third object region corresponding to that third object category. For example, category set B includes, in order, the second object categories: person, animal, food, office supplies, school supplies, vehicle, and other, and the second initial vector B is [0, 0, 0, 0, 0, 0, 0]. The third object categories appearing in the picture include animal, office supplies, and other, where the exposure level of the third object region corresponding to the animal is 80, the exposure level of the third object region corresponding to the office supplies is 100, and the exposure level of the third object region corresponding to the other category is 90. The corresponding second components in the second initial vector B are reassigned according to the exposure level corresponding to each third object category to generate the second vector of the picture (for example, [0, 80, 0, 100, 0, 0, 90]).
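The second-vector construction in this example can be sketched as a lookup over the category set. The category names and exposure values are the illustrative ones from this paragraph.

```python
# Illustrative category set, in its arrangement order.
CLASS_SET = ["person", "animal", "food", "office supplies",
             "school supplies", "vehicle", "other"]

def build_second_vector(region_exposures, class_set=CLASS_SET):
    """region_exposures maps a third object category to the exposure level
    of its third object region; categories absent from the picture keep the
    initial assignment of zero."""
    return [region_exposures.get(cls, 0) for cls in class_set]
```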
In some embodiments, the obtaining of the category set comprises: determining the second objects included in each of the plurality of pictures to obtain a plurality of second objects; classifying the plurality of second objects according to the second object category to which each second object belongs to obtain a plurality of second object categories, wherein each second object category comprises one or more second objects; and sorting the plurality of second object categories in descending order according to the number of second objects included in each second object category to generate the category set, wherein the category set includes the plurality of second object categories arranged in sequence. In some embodiments, the category set is generated by counting the categories of the second objects that appear in the plurality of pictures. For example, the second objects appearing in a large number of pictures are identified to obtain a plurality of second objects, and the plurality of second objects are classified to obtain a plurality of second object categories. The number of second objects included in each second object category is counted, and the plurality of second object categories are sorted based on that number to obtain a plurality of second object categories arranged in sequence. In some embodiments, the plurality of sequentially arranged second object categories is recorded in the category set.
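The count-and-sort step can be sketched directly with a counter. The per-picture category labels below are invented for illustration; in practice they would come from classifying the detected second objects.

```python
from collections import Counter

def build_category_set(objects_per_picture):
    """Count second objects per category over all pictures, then sort the
    categories in descending order of object count to form the category
    set (a list of sequentially arranged second object categories)."""
    counts = Counter(cls
                     for picture in objects_per_picture
                     for cls in picture)
    return [cls for cls, _ in counts.most_common()]
```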
In some embodiments, the method further includes a step S19 (not shown). In step S19, exposure sampling is performed on the image information based on different exposure parameters to obtain at least two pieces of spare image information, and for each piece of spare image information, the exposure level of the first object region corresponding to each first object category in the spare image information is calculated to obtain at least two exposure levels corresponding to each first object region. The step S14 includes: for each first object region, determining, according to the target exposure level corresponding to the first object region, the exposure level with the smallest difference from that target exposure level among the one or more exposure levels corresponding to the first object region; and generating the high dynamic range image of the image information according to the pixel information of the first object region in the spare image information corresponding to that exposure level. In some embodiments, before generating the high dynamic range image of the image information, the device acquires a plurality of pieces of spare image information based on different exposure parameters, so as to generate the high dynamic range image based on the spare image information. For example, the image information A includes the first objects: a kitten, a book, a computer, and the sky, where the kitten belongs to the animal category, the book and the computer belong to the office supplies category, and the sky belongs to the other category, so that the one or more first objects in the image information A belong to the three first object categories of animal, office supplies, and other.
The first object region corresponding to the first object category "animal" comprises the region where the kitten is located, the first object region corresponding to the first object category "office supplies" comprises the sum of the regions where the book and the computer are located, and the first object region corresponding to the first object category "other" comprises the region where the sky is located. The device acquires a plurality of pieces of spare image information of the image information A based on different exposure parameters (for example, exposure parameters such as aperture, shutter speed, and ISO sensitivity), and calculates the exposure level of the first object region corresponding to each first object category in each piece of spare image information. For example, spare image information 1, spare image information 2, and spare image information 3 are obtained. The exposure level of each first object region is calculated based on the pixel information in that first object region (for example, the average value of all pixel values in the first object region is calculated, and the calculated average value is taken as the exposure level of the first object region). Then, for the image information A, there are 3 exposure levels for each first object category in the image information A.
For each first object category, the exposure level with the smallest difference from the target exposure level corresponding to that first object category is determined from the 3 exposure levels (for example, the difference between the exposure level of the first object region calculated from spare image information 1 and the target exposure level corresponding to that first object category is the smallest), and the high dynamic range image of the image information A is then generated according to the pixel information of the first object region in the spare image information corresponding to that exposure level (for example, spare image information 1). In some embodiments, the device synthesizes the high dynamic range image of the image information A by extracting the first object regions from the spare image information. For example, if the difference between the exposure level corresponding to the first object category "animal" in spare image information 1 and the target exposure level corresponding to that first object category is the smallest, the first object region of the first object category "animal" is extracted from spare image information 1. If the difference between the exposure level corresponding to the first object category "office supplies" in spare image information 2 and the target exposure level corresponding to that first object category is the smallest, the first object region of the first object category "office supplies" is extracted from spare image information 2. If the difference between the exposure level corresponding to the first object category "other" in spare image information 3 and the target exposure level corresponding to that first object category is the smallest, the first object region of the first object category "other" is extracted from spare image information 3.
The high dynamic range image of the image information A is synthesized from the extracted first object regions. As another example, if the difference between the exposure level corresponding to the first object category "animal" in spare image information 1 and the target exposure level corresponding to that first object category is the smallest, the first object region in the image information A is processed according to the pixel information of the first object region corresponding to the first object category "animal" in spare image information 1. If the difference between the exposure level corresponding to the first object category "office supplies" in spare image information 2 and the target exposure level corresponding to that first object category is the smallest, the first object region in the image information A is processed according to the pixel information of the first object region corresponding to the first object category "office supplies" in spare image information 2. If the difference between the exposure level corresponding to the first object category "other" in spare image information 3 and the target exposure level corresponding to that first object category is the smallest, the first object region in the image information A is processed according to the pixel information of the first object region corresponding to the first object category "other" in spare image information 3.
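The per-region selection and extraction described above can be sketched as follows. The region masks, spare images, exposure values, and targets are illustrative stand-ins; real spare image information would come from the exposure sampling of step S19.

```python
import numpy as np

def synthesize_hdr(spare_images, region_masks, region_exposures, targets):
    """spare_images: list of HxWx3 arrays captured at different exposure
    parameters. region_masks: first object category -> boolean HxW mask of
    its first object region. region_exposures: category -> list of exposure
    levels, one per spare image. targets: category -> target exposure level
    from the second target vector. For each category, pick the spare image
    whose region exposure differs least from the target and copy that
    region's pixels into the output."""
    out = np.zeros_like(spare_images[0])
    for cls, mask in region_masks.items():
        diffs = [abs(exp - targets[cls]) for exp in region_exposures[cls]]
        best = int(np.argmin(diffs))          # spare image with minimal difference
        out[mask] = spare_images[best][mask]  # extract this region from it
    return out
```

One design point this sketch makes explicit: each first object region is taken whole from a single piece of spare image information, rather than blending several exposures per pixel as classical HDR pipelines do.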
FIG. 2 illustrates a block diagram of an apparatus for generating a high dynamic range image according to one aspect of the present application. The apparatus includes a one-one module, a one-two module, a one-three module, and a one-four module. The one-one module is configured to acquire image information to be processed, where the image information includes one or more first objects, and the one or more first objects belong to at least one first object category; the one-two module is configured to generate a first target vector of the image information according to the at least one first object category; the one-three module is configured to input the first target vector into a vector regression model to output a second target vector of the image information, where the second target vector includes the target exposure level of the first object region corresponding to each of the at least one first object category in the image information; and the one-four module is configured to synthesize a high dynamic range image of the image information based on the target exposure level of the corresponding first object region in the image information for each first object category.
Specifically, the one-one module is configured to acquire image information to be processed, where the image information includes one or more first objects, and the one or more first objects belong to at least one first object category. In some embodiments, the image information includes, but is not limited to, image information captured by a viewfinder of the user device. For example, when a user device (e.g., a mobile phone) frames a scene through its viewfinder, the image information can be acquired, so that the device can analyze and process it. In some embodiments, the first object includes, but is not limited to, an item (e.g., a cup, a book, a computer, the sky, etc.) appearing in the image information. In some embodiments, the first objects appearing in the image information are classified to determine the at least one first object category to which the one or more first objects correspond. For example, the following first objects appear in the image information A: a kitten, a book, a computer, and the sky, where the kitten belongs to the animal category, the book and the computer belong to the office supplies category, and the sky belongs to the other category, so that the one or more first objects in the image information A belong to the three first object categories of animal, office supplies, and other.
The one-two module is configured to generate a first target vector of the image information according to the at least one first object category. In some embodiments, the first target vector is generated based on the at least one first object category. In some embodiments, the first target vector comprises a plurality of first components; e.g., the first target vector is [0, 1, 0, 1, 0, 0, 1], where the values "0" and "1" in the first target vector are its first components. For example, the image information A includes the first objects: a kitten, a desk, a computer, and the sky, which may be classified to obtain the at least one first object category (e.g., animal, office supplies, and other) to which the first objects in the image information A belong. The device generates a corresponding first target vector (e.g., [0, 1, 0, 1, 0, 0, 1]) based on the three first object categories "animal, office supplies, and other" to vectorize the image information A.
The one-three module is configured to input the first target vector into a vector regression model to output a second target vector of the image information, where the second target vector includes the target exposure level of the first object region corresponding to each of the at least one first object category in the image information. In some embodiments, the vector regression model is used to output a corresponding second target vector based on the input first target vector. For example, the device vectorizes the image information based on the at least one first object category to generate the first target vector, and inputs the first target vector into the vector regression model to obtain the second target vector corresponding to the image information. In some embodiments, the second target vector comprises a plurality of second components; e.g., the second target vector is [0, 100, 0, 80, 0, 0, 90], where the values "80", "100", "90", and "0" are the second components of the second target vector. In some embodiments, the second target vector output by the vector regression model includes the target exposure level of the first object region corresponding to each of the at least one first object category in the image information. For example, the at least one first object category to which the first objects in the image information A belong includes animal, office supplies, and other, and the second target vector obtained is [0, 100, 0, 80, 0, 0, 90]; the second component "100" is taken as the target exposure level of the first object region corresponding to the first object category "animal", the second component "80" as the target exposure level of the first object region corresponding to the first object category "office supplies", and the second component "90" as the target exposure level of the first object region corresponding to the first object category "other".
In some embodiments, the device determines, based on the arrangement order of the second components in the second target vector, the first object category corresponding to each second component, so as to determine the target exposure level of the first object region corresponding to each first object category in the image information. In some embodiments, the first object region corresponding to a first object category in the image information includes the region where the one or more objects belonging to that first object category are located; for example, in the image information A, the first object category "office supplies" includes the desk and the computer, and the first object region corresponding to the first object category "office supplies" in the image information A includes the regions corresponding to the desk and the computer. In some embodiments, the device divides the image information into one or more regions based on a YOLO algorithm, detects the first object corresponding to each region, classifies the first objects, and determines the first object region corresponding to each first object category (for example, the region where the first objects belonging to the same first object category are located is taken as the first object region corresponding to that first object category). In some embodiments, the image information may also be segmented into one or more regions based on image segmentation techniques (e.g., image segmentation algorithms such as ResNet, VGGNet, Fast R-CNN, etc.), and then the first object corresponding to each region is identified (e.g., based on image recognition techniques) to classify the first objects and determine the first object region corresponding to each first object category.
Of course, those skilled in the art will appreciate that the above specific operations for identifying and segmenting the image information are merely examples, and other existing or future specific operations, as may be applicable to the present application, are also within the scope of the present application and are incorporated herein by reference. In some embodiments, the exposure level of an object region is calculated based on the pixel information within that object region. For example, after the object region is grayed, the average value of all pixel information in the object region is calculated to obtain the exposure level of the object region. In some embodiments, the exposure effect is best when the exposure level of the first object region is the target exposure level of the corresponding object.
A module 1-4 is configured to synthesize a high dynamic range image of the image information based on the target exposure of the first object region corresponding to each first object class in the image information. After the target exposure of each first object region is obtained, the high dynamic range image of the image information is synthesized based on these target exposures; because the image characteristics of the image information are fully taken into account, the resulting high dynamic range image is more realistic and of higher quality.
In some embodiments, module 1-1 is configured to obtain image information to be processed; determine one or more first objects appearing in the image information; and determine, according to the first object class to which each of the one or more first objects belongs, at least one first object class to which the one or more first objects belong.
Here, the specific implementation corresponding to module 1-1 is the same as or similar to that of step S11, and is therefore not repeated here, but is incorporated herein by reference.
In some embodiments, module 1-2 is configured to determine, according to the at least one first object class and a category set, the assignment of each first component in a first initial vector to generate a first target vector of the image information, wherein the first initial vector corresponds to the category set. In some embodiments, the first initial vector comprises a plurality of first components, and the device generates the first target vector of the image information by assigning values to the first initial vector.
Here, the specific implementation corresponding to module 1-2 is the same as or similar to that of step S12, and is therefore not repeated here, but is incorporated herein by reference.
In some embodiments, the class set includes a plurality of sequentially arranged second object classes, and the first initial vector includes a plurality of first components; the number of second object classes is equal to the number of first components, each second object class has its corresponding first component in the first initial vector based on the arrangement order of the second object classes, and the initial assignment of each first component is zero. Module 1-2 is configured to: if a second object class identical to a first object class exists in the class set, reassign the first component corresponding to that second object class in the first initial vector according to a target assignment, to generate the first target vector of the image information.
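A minimal sketch of this assignment rule (the category names and the target assignment value of 1.0 are illustrative assumptions):

```python
def first_target_vector(first_object_classes, category_set, target_value=1.0):
    """Start from a first initial vector of zeros, with one first component
    per second object class in the class set (in arrangement order), and
    reassign the component of every second object class that matches one of
    the image's first object classes to the target assignment."""
    vector = [0.0] * len(category_set)  # first initial vector, all zeros
    present = set(first_object_classes)
    for i, category in enumerate(category_set):  # fixed arrangement order
        if category in present:
            vector[i] = target_value
    return vector

print(first_target_vector(["office supplies"],
                          ["person", "office supplies", "plant"]))
```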
Here, the specific implementation corresponding to module 1-2 is the same as or similar to that of step S12, and is therefore not repeated here, but is incorporated herein by reference.
In some embodiments, module 1-2 is configured to sequentially detect, according to the arrangement order of the plurality of second object classes in the class set, whether a first object class identical to a given second object class exists among the one or more first object classes, and if so, to reassign the first component corresponding to that second object class in the first initial vector according to a target assignment, to generate the first target vector of the image information.
Here, the specific implementation corresponding to module 1-2 is the same as or similar to that of step S12, and is therefore not repeated here, but is incorporated herein by reference.
In some embodiments, the second target vector includes a number of second components equal to the number of the second object classes, each second object class having its corresponding second component in the second target vector based on the arrangement order of the second object classes, and the apparatus further includes a module 1-5 (not shown) configured to assign, for each of the at least one first object class in the image information, the value of the second component corresponding to the second object class identical to that first object class in the second target vector as the target exposure of the first object region corresponding to that first object class in the image information.
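Reading the target exposures back out of the second target vector, per the correspondence just described, might be sketched as follows (the data shapes and names are hypothetical):

```python
def target_exposures(second_target_vector, category_set, first_object_classes):
    """The i-th second component corresponds to the i-th second object class
    in the class set; for each first object class present in the image, the
    matching component is taken as the target exposure of that class's
    first object region."""
    index = {category: i for i, category in enumerate(category_set)}
    return {c: second_target_vector[index[c]]
            for c in first_object_classes if c in index}

print(target_exposures([0.2, 0.7, 0.4],
                       ["person", "office supplies", "plant"],
                       ["office supplies"]))
```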
Here, the specific implementation corresponding to module 1-5 is the same as or similar to that of step S15, and is therefore not repeated here, but is incorporated herein by reference.
In some embodiments, the apparatus further comprises a module 1-6 (not shown) for constructing the vector regression model according to the first vectors and second vectors of a plurality of pictures.
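The patent does not fix a model family for the vector regression model; one hedged realization is an ordinary least-squares linear map (with a bias term) from first vectors to second vectors, fitted on the training pictures:

```python
import numpy as np

def fit_vector_regression(first_vectors, second_vectors):
    """Fit a linear map W so that [x, 1] @ W approximates the second vector
    of each training picture with first vector x. Any vector-to-vector
    regressor could be substituted here."""
    X = np.hstack([np.asarray(first_vectors, dtype=float),
                   np.ones((len(first_vectors), 1))])  # append bias column
    Y = np.asarray(second_vectors, dtype=float)
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def predict_second_vector(W, first_vector):
    x = np.append(np.asarray(first_vector, dtype=float), 1.0)
    return x @ W

# Toy training data: two categories with per-category exposures 0.5 and 0.8.
W = fit_vector_regression([[1, 0], [0, 1], [1, 1]],
                          [[0.5, 0.0], [0.0, 0.8], [0.5, 0.8]])
print(predict_second_vector(W, [1, 0]))
```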
Here, the specific implementation corresponding to module 1-6 is the same as or similar to that of step S16, and is therefore not repeated here, but is incorporated herein by reference.
In some embodiments, the apparatus further comprises a module 1-7 (not shown) for generating, for each of the plurality of pictures, a first vector of the picture according to at least one third object class to which one or more third objects appearing in the picture belong; and a module 1-8 for generating a second vector of the picture according to the exposure level of the third object region corresponding to each of the at least one third object category in the picture.
Here, the specific implementations corresponding to module 1-7 and module 1-8 are the same as or similar to those of step S17 and step S18, and are therefore not repeated here, but are incorporated herein by reference.
In some embodiments, module 1-7 is configured to: for each of the plurality of pictures, determine the assignment of each first component in a first initial vector according to at least one third object category to which one or more third objects appearing in the picture belong and a category set, to generate a first vector of the picture, wherein the first initial vector corresponds to the category set.
Here, the specific implementation corresponding to module 1-7 is the same as or similar to that of step S17, and is therefore not repeated here, but is incorporated herein by reference.
In some embodiments, the class set includes a plurality of sequentially arranged second object classes, and the first initial vector includes a plurality of first components; the number of second object classes is equal to the number of first components, so that each second object class has its corresponding first component in the first initial vector, and the initial assignment of each first component is zero. Module 1-7 is configured to: if a second object class identical to a third object class exists in the class set, reassign the first component corresponding to that second object class in the first initial vector according to a target assignment, to generate the first vector of the picture.
Here, the specific implementation corresponding to module 1-7 is the same as or similar to that of step S17, and is therefore not repeated here, but is incorporated herein by reference.
In some embodiments, module 1-8 includes a module 1-8-1 (not shown), a module 1-8-2, and a module 1-8-3. Module 1-8-1 is configured to determine the third object region corresponding to each of the at least one third object category in the picture; module 1-8-2 is configured to calculate the exposure level of each third object region, so as to obtain the exposure level of the third object region corresponding to each of the at least one third object category in the picture; and module 1-8-3 is configured to determine, according to the exposure level of the third object region corresponding to each of the at least one third object class in the picture and a class set, the assignment of each second component in a second initial vector to generate a second vector of the picture, where the second initial vector corresponds to the class set.
Here, the specific implementations of module 1-8-1, module 1-8-2, and module 1-8-3 are the same as or similar to those of step S181, step S182, and step S183, and are therefore not repeated here, but are incorporated herein by reference.
In some embodiments, module 1-8-1 is configured to determine one or more third objects appearing in the picture and the third object sub-region corresponding to each third object, and to take the third object sub-regions corresponding to third objects belonging to the same third object class as the third object region corresponding to that third object class.
Here, the specific implementation corresponding to module 1-8-1 is the same as or similar to that of step S181, and is therefore not repeated here, but is incorporated herein by reference.
In some embodiments, module 1-8-2 is configured to calculate the exposure level of each third object region according to the pixel information of that third object region, so as to obtain the exposure level of the third object region corresponding to each of the at least one third object category in the picture.
Here, the specific implementation corresponding to module 1-8-2 is the same as or similar to that of step S182, and is therefore not repeated here, but is incorporated herein by reference.
In some embodiments, the class set includes a plurality of sequentially arranged second object classes, the second initial vector includes a plurality of second components, the initial assignment of each second component is zero, and the number of second object classes is equal to the number of second components, so that each second object class has its corresponding second component in the second initial vector. Module 1-8-3 is configured to: if a second object class identical to a third object class exists in the class set, reassign the second component corresponding to that second object class in the second initial vector according to the exposure level of the third object region corresponding to the third object class, to generate the second vector of the picture.
Here, the specific implementation corresponding to module 1-8-3 is the same as or similar to that of step S183, and is therefore not repeated here, but is incorporated herein by reference.
In some embodiments, the obtaining of the category set comprises: determining the second objects included in each of the plurality of pictures to obtain a plurality of second objects; classifying the plurality of second objects according to the second object class to which each second object belongs to obtain a plurality of second object categories, wherein each second object category comprises one or more second objects; and sorting the plurality of second object categories in descending order of the number of second objects included in each second object category to generate the category set, wherein the category set includes a plurality of sequentially arranged second object categories. In some embodiments, the category set is generated by counting the categories of the second objects appearing in the plurality of pictures. For example, the second objects appearing in the plurality of pictures are identified to obtain a plurality of second objects, and the plurality of second objects are classified to obtain a plurality of second object categories. The number of second objects included in each second object category is counted, and the second object categories are sorted based on these counts to obtain a plurality of sequentially arranged second object categories. In some embodiments, the plurality of sequentially arranged second object categories is recorded in the category set.
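The counting-and-sorting procedure above can be sketched as follows (representing each picture as a list of (object, category) pairs is an assumption made for illustration):

```python
from collections import Counter

def build_category_set(pictures):
    """Count the second objects of each second object category across all
    pictures, then arrange the categories in descending order of count."""
    counts = Counter(category
                     for picture in pictures
                     for _, category in picture)
    return [category for category, _ in counts.most_common()]

pictures = [
    [("desk", "office supplies"), ("computer", "office supplies")],
    [("tree", "plant")],
]
print(build_category_set(pictures))  # ['office supplies', 'plant']
```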
In some embodiments, the apparatus further comprises a module 1-9 (not shown) for exposure-sampling the image information based on different exposure parameters to obtain at least two pieces of spare image information and, for each piece of spare image information, calculating the exposure level of the first object region corresponding to each first object category in the spare image information, so as to obtain at least two exposure levels corresponding to each first object region. Module 1-4 is configured to: for each first object region, determine, according to the target exposure corresponding to that first object region, the exposure level with the smallest difference from the target exposure among the exposure levels corresponding to that first object region; and generate the high dynamic range image of the image information according to the pixel information of the first object region in the spare image information corresponding to that exposure level.
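Choosing, for each first object region, the piece of spare image information whose measured exposure is closest to that region's target exposure might be sketched as follows (the data shapes and spare-image identifiers are hypothetical):

```python
def pick_closest_samples(region_targets, sampled_exposures):
    """region_targets: {region: target exposure}.
    sampled_exposures: {region: {spare_image_id: measured exposure}}.
    Returns, per region, the spare image whose measured exposure differs
    least from the region's target exposure; that image's pixels would
    then be used for the region when compositing the HDR result."""
    choice = {}
    for region, target in region_targets.items():
        samples = sampled_exposures[region]
        choice[region] = min(samples, key=lambda img: abs(samples[img] - target))
    return choice

picked = pick_closest_samples(
    {"office supplies": 0.6},
    {"office supplies": {"under_exposed": 0.3, "over_exposed": 0.7}},
)
print(picked)  # {'office supplies': 'over_exposed'}
```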
Here, the specific implementations corresponding to module 1-9 and module 1-4 are the same as or similar to those of step S19 and step S14, and are therefore not repeated here, but are incorporated herein by reference.
In addition to the methods and apparatus described in the above embodiments, the present application further provides a computer-readable storage medium storing computer code that, when executed, performs the method according to any of the foregoing embodiments.
The present application further provides a computer program product which, when executed by a computer device, performs the method according to any of the foregoing embodiments.
The present application further provides a computer device, comprising:
one or more processors;
a memory for storing one or more computer programs;
the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any of the foregoing embodiments.
FIG. 3 illustrates an exemplary system that can be used to implement the various embodiments described herein.
In some embodiments, as illustrated in FIG. 3, the system 300 can be implemented as any one of the devices in the embodiments described above. In some embodiments, system 300 may include one or more computer-readable media (e.g., system memory 315 or NVM/storage 320) having instructions and one or more processors (e.g., processor(s) 305) coupled with the one or more computer-readable media and configured to execute the instructions to implement modules to perform the actions described herein.
For one embodiment, system control module 310 may include any suitable interface controllers to provide any suitable interface to at least one of processor(s) 305 and/or any suitable device or component in communication with system control module 310.
The system control module 310 may include a memory controller module 330 to provide an interface to the system memory 315. Memory controller module 330 may be a hardware module, a software module, and/or a firmware module.
System memory 315 may be used, for example, to load and store data and/or instructions for system 300. For one embodiment, system memory 315 may include any suitable volatile memory, such as suitable DRAM. In some embodiments, system memory 315 may include double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, system control module 310 may include one or more input/output (I/O) controllers to provide an interface to NVM/storage 320 and communication interface(s) 325.
For example, NVM/storage 320 may be used to store data and/or instructions. NVM/storage 320 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 320 may include storage resources that are physically part of the device on which system 300 is installed or may be accessed by the device and not necessarily part of the device. For example, NVM/storage 320 may be accessible over a network via communication interface(s) 325.
Communication interface(s) 325 may provide an interface for system 300 to communicate over one or more networks and/or with any other suitable device. System 300 may wirelessly communicate with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) 305 may be packaged together with logic for one or more controller(s) (e.g., memory controller module 330) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be packaged together with logic for one or more controller(s) of the system control module 310 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic for one or more controller(s) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic for one or more controller(s) of the system control module 310 to form a system on a chip (SoC).
In various embodiments, system 300 may be, but is not limited to being: a server, a workstation, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, system 300 may have more or fewer components and/or different architectures. For example, in some embodiments, system 300 includes one or more cameras, a keyboard, a Liquid Crystal Display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an Application Specific Integrated Circuit (ASIC), and speakers.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, as an Application Specific Integrated Circuit (ASIC), a general purpose computer or any other similar hardware device. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions described above. Likewise, the software programs (including associated data structures) of the present application may be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Further, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
Additionally, some portions of the present application may be applied as a computer program product, such as computer program instructions, which, when executed by a computer, may invoke or provide the method and/or solution according to the present application through the operation of the computer. Those skilled in the art will appreciate that the form in which the computer program instructions reside on a computer-readable medium includes, but is not limited to, source files, executable files, installation package files, and the like, and that the manner in which the computer program instructions are executed by a computer includes, but is not limited to: the computer directly executes the instruction, or the computer compiles the instruction and then executes the corresponding compiled program, or the computer reads and executes the instruction, or the computer reads and installs the instruction and then executes the corresponding installed program. In this regard, computer readable media can be any available computer readable storage media or communication media that can be accessed by a computer.
Communication media includes media whereby communication signals, including, for example, computer readable instructions, data structures, program modules, or other data, are transmitted from one system to another. Communication media may include conductive transmission media such as cables and wires (e.g., fiber optics, coaxial, etc.) and wireless (non-conductive transmission) media capable of propagating energy waves such as acoustic, electromagnetic, RF, microwave, and infrared. Computer readable instructions, data structures, program modules, or other data may be embodied in a modulated data signal, for example, in a wireless medium such as a carrier wave or similar mechanism such as is embodied as part of spread spectrum techniques. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The modulation may be analog, digital, or hybrid modulation techniques.
By way of example, and not limitation, computer-readable storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. For example, computer-readable storage media include, but are not limited to, volatile memory such as random access memory (RAM, DRAM, SRAM); non-volatile memory such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), and magnetic and ferromagnetic/ferroelectric memories (MRAM, FeRAM); magnetic and optical storage devices (hard disk, tape, CD, DVD); and other media, now known or later developed, that can store computer-readable information/data for use by a computer system.
An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or a solution according to the aforementioned embodiments of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not to denote any particular order.

Claims (16)

1. A method for generating a high dynamic range image, wherein the method comprises:
acquiring image information to be processed, wherein the image information comprises one or more first objects, and the one or more first objects belong to at least one first object category;
generating a first target vector of the image information according to the at least one first object class;
inputting the first target vector into a vector regression model to output a second target vector of the image information, wherein the second target vector comprises the target exposure of the first object region corresponding to each of the at least one first object category in the image information;
synthesizing a high dynamic range image of the image information according to the target exposure of the first object region corresponding to each first object category in the image information;
wherein the generating a first target vector of the image information according to the at least one first object class comprises:
determining an assignment of each first component in a first initial vector according to the at least one first object category and a category set to generate a first target vector of the image information, wherein the first initial vector corresponds to the category set;
wherein the class set comprises a plurality of second object classes arranged in sequence, the first initial vector comprises a plurality of first components, the number of the plurality of second object classes is equal to the number of the plurality of first components, and each of the plurality of second object classes has its corresponding first component in the first initial vector;
the determining, according to the at least one first object class and the class set, an assignment of each first component in a first initial vector to generate a first target vector of the image information includes:
and if a second object class which is the same as the first object class exists in the class set, re-assigning a first component of the second object class corresponding to the first initial vector according to target assignment to generate a first target vector of the image information.
2. The method of claim 1, wherein the obtaining image information to be processed, wherein the image information includes one or more first objects belonging to at least one first object category comprises:
acquiring image information to be processed;
determining one or more first objects appearing in the image information;
determining at least one first object class to which the one or more first objects belong according to the first object class to which each of the one or more first objects belongs.
3. The method of claim 1, wherein the initial assignment of each of the plurality of first components is zero.
4. The method of claim 3, wherein, if there is a second object class in the class set that is the same as the first object class, re-assigning a corresponding first component of the second object class in the first initial vector according to a target assignment to generate a first target vector of the image information, comprises:
and sequentially detecting whether a first object class identical to the second object class exists in the one or more first object classes according to the arrangement sequence of the plurality of second object classes in the class set, and if so, re-assigning a first component corresponding to the second object class in the first initial vector according to a target assigned value to generate a first target vector of the image information.
5. The method of claim 1, wherein the second target vector includes a number of second components equal to a number of the second object classes, each of the second object classes having its corresponding second component in the second target vector based on an order of arrangement of the second object classes, the method further comprising:
and for each of at least one first object class in the image information, assigning the value of a corresponding second component of a second object class which is the same as the first object class in the second target vector as the target exposure of the first object region corresponding to the first object class in the image information.
6. The method of claim 1, wherein the method further comprises:
and constructing the vector regression model according to the first vector and the second vector of the plurality of pictures.
7. The method of claim 6, wherein the method further comprises:
for each picture in the plurality of pictures, generating a first vector of the picture according to at least one third object category to which one or more third objects appearing in the picture belong;
and generating a second vector of the picture according to the exposure level of the third object area corresponding to each of the at least one third object category in the picture.
8. The method of claim 7, wherein for each of the plurality of pictures, generating the first vector of the picture according to at least one third object class to which one or more third objects appearing in the picture belong comprises:
for each picture in the plurality of pictures, determining an assignment of each first component in a first initial vector according to at least one third object category to which one or more third objects appearing in the picture belong and a category set to generate a first vector of the picture, wherein the first initial vector corresponds to the category set.
9. The method of claim 8, wherein the initial assignment of each of the plurality of first components is zero;
for each of the multiple pictures, determining an assignment of each first component in a first initial vector according to at least one third object category to which one or more third objects appearing in the picture belong and a category set to generate a first vector of the picture, where the first initial vector corresponds to the category set, and the method includes:
and if a second object class which is the same as the third object class exists in the class set, re-assigning a first component of the second object class corresponding to the first initial vector according to target assignment to generate a first vector of the picture.
10. The method of claim 7, wherein the generating the second vector of the picture according to the exposure level of the corresponding third object region in the picture for each of the at least one third object class comprises:
determining a third object region corresponding to each of the at least one third object category in the picture;
calculating the exposure of each third object region to obtain the exposure of the third object region corresponding to each third object category in the at least one third object category in the picture;
determining the assignment of each second component in a second initial vector according to the exposure of a corresponding third object region in the picture of each of the at least one third object class and a class set to generate a second vector of the picture, wherein the second initial vector corresponds to the class set;
wherein the class set further comprises a plurality of second object classes arranged in sequence, the second initial vector comprises a plurality of second components, the initial assignment of each second component is zero, and the number of the plurality of second object classes is equal to the number of the plurality of second components, so that each second object class has its corresponding second component in the second initial vector;
determining, according to the exposure level of the third object region corresponding to each of the at least one third object class in the picture and a class set, an assignment of each second component in a second initial vector to generate a second vector of the picture, where the second initial vector corresponds to the class set, and the determining includes:
and if a second object class which is the same as the third object class exists in the class set, reassigning a second component of the second object class corresponding to the second initial vector according to the exposure of a third object area corresponding to the third object class to generate a second vector of the picture.
11. The method of claim 10, wherein the determining, in the picture, the third object region corresponding to each of the at least one third object category comprises:
determining one or more third objects appearing in the picture and a third object sub-region corresponding to each third object;
taking the third object sub-regions corresponding to the third objects belonging to a same third object category as the third object region corresponding to that third object category.
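The grouping step of claim 11 amounts to collecting per-object sub-regions by category; a brief sketch, with the tuple layout of a sub-region and all names assumed for illustration:

```python
from collections import defaultdict

def regions_by_category(detections):
    """detections: list of (third_object_category, sub_region) pairs, one per
    third object appearing in the picture. Groups the sub-regions of objects
    belonging to the same category into that category's third object region."""
    regions = defaultdict(list)
    for category, sub_region in detections:
        regions[category].append(sub_region)
    return dict(regions)
```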
12. The method of claim 10, wherein the calculating the exposure level of each third object region to obtain the exposure level of the third object region corresponding to each of the at least one third object category in the picture comprises:
calculating, according to the pixel information of each third object region, the exposure level of that third object region, to obtain the exposure level of the third object region corresponding to each of the at least one third object category in the picture.
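Claim 12 leaves the exposure formula open; one common choice is a mean-luminance estimate over the region's pixels. The sketch below assumes 8-bit RGB pixels and Rec. 709 luma weights, neither of which is fixed by the patent:

```python
def region_exposure(pixels):
    """pixels: iterable of (R, G, B) tuples for one third object region.
    Returns a mean-luminance exposure estimate normalized to [0, 1]."""
    pixels = list(pixels)
    if not pixels:
        return 0.0
    # Rec. 709 luma weights -- an illustrative assumption, not the claimed formula.
    total = sum(0.2126 * r + 0.7152 * g + 0.0722 * b for r, g, b in pixels)
    return total / (255.0 * len(pixels))
```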
13. The method according to any one of claims 3 to 12, wherein the category set is obtained by:
determining the second objects included in each of the plurality of pictures to obtain a plurality of second objects;
classifying the plurality of second objects according to the second object category to which each second object belongs, to obtain a plurality of second object categories, wherein each second object category comprises one or more second objects;
sorting the plurality of second object categories in descending order of the number of second objects included in each second object category to generate the category set, wherein the category set comprises the plurality of second object categories arranged in sequence.
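The count-and-sort procedure of claim 13 can be sketched with a frequency counter; the function name and the representation of a picture as a list of detected category labels are assumptions for illustration:

```python
from collections import Counter

def build_category_set(pictures):
    """pictures: list of pictures, each represented as the list of second
    object categories of the second objects detected in it."""
    counts = Counter(category for picture in pictures for category in picture)
    # most_common() sorts by descending object count, which yields the
    # category set as a sequence of categories.
    return [category for category, _ in counts.most_common()]
```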
14. The method of claim 1, wherein, before the synthesizing the high dynamic range image of the image information according to the target exposure level of the first object region corresponding to each first object category in the image information, the method further comprises:
performing exposure sampling on the image information based on different exposure parameters to obtain at least two pieces of standby image information; and, for each piece of standby image information, calculating the exposure level of the first object region corresponding to each first object category in the standby image information, to obtain at least two exposure levels corresponding to each first object region;
wherein the synthesizing the high dynamic range image of the image information according to the target exposure level of the first object region corresponding to each first object category in the image information comprises:
for each first object region, determining, according to the target exposure level corresponding to the first object region, an exposure level with the minimum difference from the target exposure level among the at least two exposure levels corresponding to the first object region; and generating the high dynamic range image of the image information according to the pixel information of the first object region in the piece of standby image information corresponding to the determined exposure level.
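The per-region selection step of claim 14 is a nearest-value lookup over the sampled exposure levels. A minimal sketch, assuming the standby images are indexed by the exposure level they were sampled at (names and data layout are illustrative):

```python
def pick_exposure(target, candidates):
    """candidates: dict mapping a sampled exposure level to the index of the
    piece of standby image information taken at that level. Returns the
    (level, index) pair whose level has the minimum difference from the
    target exposure level of the region."""
    level = min(candidates, key=lambda e: abs(e - target))
    return level, candidates[level]
```

The region's pixels would then be copied from the standby image at the returned index when compositing the high dynamic range result.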
15. An apparatus for generating a high dynamic range image, wherein the apparatus comprises:
a processor; and
a memory arranged to store computer-executable instructions that, when executed, cause the processor to perform the operations of the method of any one of claims 1 to 14.
16. A computer-readable medium storing instructions that, when executed, cause a system to perform the method of any one of claims 1 to 14.
CN202011621503.2A 2020-12-30 2020-12-30 Method and equipment for generating high dynamic range image Active CN112839181B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011621503.2A CN112839181B (en) 2020-12-30 2020-12-30 Method and equipment for generating high dynamic range image

Publications (2)

Publication Number Publication Date
CN112839181A (en) 2021-05-25
CN112839181B (en) 2022-10-11

Family

ID=75925801

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011621503.2A Active CN112839181B (en) 2020-12-30 2020-12-30 Method and equipment for generating high dynamic range image

Country Status (1)

Country Link
CN (1) CN112839181B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106485274A (en) * 2016-10-09 2017-03-08 湖南穗富眼电子科技有限公司 Object classification method based on a target characteristic map
CN109791688A (en) * 2016-06-17 2019-05-21 华为技术有限公司 Exposure-dependent luminance transformation
CN110087003A (en) * 2019-04-30 2019-08-02 深圳市华星光电技术有限公司 Multi-exposure image fusion method
CN110100252A (en) * 2016-12-23 2019-08-06 奇跃公司 Techniques for determining settings of a content capture device
CN110445989A (en) * 2019-08-05 2019-11-12 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
JP6696095B1 (en) * 2018-11-07 2020-05-20 SZ DJI Technology Co., Ltd. Image processing device, imaging device, image processing method, and program
CN111527743A (en) * 2017-12-28 2020-08-11 伟摩有限责任公司 Multiple modes of operation with extended dynamic range
CN111918601A (en) * 2018-03-30 2020-11-10 依视路国际公司 Method and system for characterizing a subject's vision system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10567723B2 (en) * 2017-08-11 2020-02-18 Samsung Electronics Co., Ltd. System and method for detecting light sources in a multi-illuminated environment using a composite RGB-IR sensor




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant