CN114071024A - Image shooting method, neural network training method, device, equipment and medium

Info

Publication number: CN114071024A
Application number: CN202111423960.5A (filed by Beijing Baidu Netcom Science and Technology Co Ltd)
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: photographing, regions, parameters, neural network, shooting
Inventors: 魏胜禹, 杜宇宁, 董水龙, 崔程, 郭若愚, 陆彬, 郜廷权, 刘其文, 胡晓光, 于佃海, 马艳军
Current and original assignee: Beijing Baidu Netcom Science and Technology Co Ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Priority and filing date: 2021-11-26
Publication date: 2022-02-18
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)

Classifications

    • H04N 23/73: Circuitry for compensating brightness variation in the scene by influencing the exposure time (under H04N 23/00, Cameras or camera modules comprising electronic image sensors; control thereof)
    • G06N 3/045: Combinations of networks (under G06N 3/04, Neural network architectures)
    • G06N 3/08: Learning methods for neural networks
    • H04N 23/66: Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N 23/80: Camera processing pipelines; components thereof


Abstract

The disclosure provides an image shooting method, a neural network training method, a device, equipment, and a medium, and relates to the field of computers, in particular to computer vision, image processing, and deep learning technologies. The method comprises the following steps: performing semantic segmentation on a preview image to obtain a plurality of regions; determining a photographing parameter for each of the plurality of regions; and instructing, for each of the plurality of regions, a photosensitive unit corresponding to the region in the photographing apparatus to perform photographing according to the photographing parameters for the region.

Description

Image shooting method, neural network training method, device, equipment and medium
Technical Field
The present disclosure relates to the field of computers, in particular to computer vision, image processing, and deep learning technologies, and more particularly to an image capturing method, a neural network training method, and a corresponding apparatus, electronic device, computer-readable storage medium, and computer program product.
Background
Artificial intelligence is the discipline of making computers simulate human mental processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), and spans both hardware-level and software-level technologies. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, and big data processing; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, and knowledge graph technologies.
When a digital image sensor (for example, a complementary metal-oxide-semiconductor (CMOS) sensor or a charge-coupled device (CCD)) captures an image, parameters such as the sensitivity of the sensor can be dynamically adjusted by controlling the aperture size, the operating time of the sensor, the gain of the electrical signal, and so on, in order to adapt to scenes of different brightness and better record image information.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, unless otherwise indicated, the problems mentioned in this section should not be considered as having been acknowledged in any prior art.
Disclosure of Invention
The present disclosure provides an image capturing method, a neural network training method, an apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
According to an aspect of the present disclosure, there is provided an image photographing method. The method comprises the following steps: performing semantic segmentation on a preview image to obtain a plurality of regions; determining a photographing parameter for each of the plurality of regions; and instructing, for each of the plurality of regions, a photosensitive unit corresponding to the region in the photographing apparatus to perform photographing according to the photographing parameters for the region.
According to another aspect of the present disclosure, a method of training a neural network is provided. The method comprises the following steps: performing semantic segmentation on a preview image to obtain a plurality of regions and the respective types of the regions; acquiring a plurality of reference images obtained by shooting the scene captured in the preview image a plurality of times according to a plurality of shooting parameters; determining, for a sample region of the plurality of regions, a target shooting parameter for the sample region among the plurality of shooting parameters based on the plurality of reference images; inputting the sample region and the type of the sample region in the preview image into a neural network to obtain a predicted shooting parameter output by the neural network; and training the neural network based on the target shooting parameter and the predicted shooting parameter.
According to another aspect of the present disclosure, an image photographing apparatus is provided. The device includes: a semantic segmentation unit configured to perform semantic segmentation on the preview image to obtain a plurality of regions; a parameter determination unit configured to determine a shooting parameter for each of a plurality of areas; and an instructing unit configured to instruct, for each of the plurality of areas, a light-sensing unit corresponding to the area in the photographing apparatus to perform photographing according to the photographing parameters for the area.
According to another aspect of the present disclosure, a training apparatus of a neural network is provided. The device includes: a semantic segmentation unit configured to perform semantic segmentation on a preview image to obtain a plurality of regions and the respective types of the regions; an acquisition unit configured to acquire a plurality of reference images obtained by shooting the scene captured in the preview image a plurality of times according to a plurality of shooting parameters; a determination unit configured to determine, for a sample region among the plurality of regions, a target shooting parameter for the sample region among the plurality of shooting parameters based on the plurality of reference images; a neural network configured to receive the sample region and the type of the sample region in the preview image and to output a predicted shooting parameter; and a training unit configured to train the neural network based on the target shooting parameter and the predicted shooting parameter.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the image capture method or the neural network training method described above.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to execute the above image capturing method or neural network training method.
According to another aspect of the present disclosure, a computer program product is provided, comprising a computer program, wherein the computer program, when executed by a processor, implements the above-described image capturing method or neural network training method.
According to one or more embodiments of the disclosure, the preview image is segmented by semantic segmentation, shooting parameters are determined for each segmented region, and each region is then shot with its corresponding parameters, so that an image in which both the bright and dark portions are correctly imaged can be obtained from a single shot, avoiding a large occupation of storage space. In addition, since the shooting parameters are uniform within each semantically segmented region, the poor image appearance caused by applying different shooting parameters to different parts of the same object is avoided.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments and, together with the description, serve to explain them. The illustrated embodiments are for purposes of illustration only and do not limit the scope of the claims. Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a schematic diagram of an exemplary system in which various methods described herein may be implemented, according to an embodiment of the present disclosure;
fig. 2 illustrates a flowchart of an image capturing method according to an exemplary embodiment of the present disclosure;
FIG. 3 shows a flow chart of a method of training a neural network according to an exemplary embodiment of the present disclosure;
fig. 4 illustrates a block diagram of a structure of an image photographing apparatus according to an exemplary embodiment of the present disclosure;
FIG. 5 shows a block diagram of a training apparatus for a neural network according to an exemplary embodiment of the present disclosure; and
FIG. 6 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, unless otherwise specified, the use of the terms "first", "second", etc. to describe various elements is not intended to limit the positional relationship, the timing relationship, or the importance relationship of the elements, and such terms are used only to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, based on the context, they may also refer to different instances.
The terminology used in the description of the various described examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically limited, the elements may be one or more. Furthermore, the term "and/or" as used in this disclosure is intended to encompass any and all possible combinations of the listed items.
In the related art, the sensitivity of a conventional image sensor is globally uniform; that is, only one fixed sensitivity can be set for each image. In a scene with complex lighting, some areas require higher sensitivity while others require lower sensitivity, and it is difficult to record all of the scene's information with a single uniform sensitivity. For example, if a lower sensitivity is used, bright areas are imaged correctly but dark areas are underexposed; conversely, dark areas can be imaged correctly while bright areas are overexposed.
To solve these problems, the present disclosure segments the preview image by semantic segmentation, determines shooting parameters for each segmented region, and then shoots each region with its corresponding parameters, so that an image in which both the bright and dark portions are correctly imaged can be obtained from a single shot, avoiding a large occupation of storage space. In addition, since the shooting parameters are uniform within each semantically segmented region, the poor image appearance caused by applying different shooting parameters to different parts of the same object is avoided.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented in accordance with embodiments of the present disclosure. Referring to fig. 1, the system 100 includes one or more client devices 101, 102, 103, 104, 105, and 106, a server 120, and one or more communication networks 110 coupling the one or more client devices to the server 120. Client devices 101, 102, 103, 104, 105, and 106 may be configured to execute one or more applications.
In embodiments of the present disclosure, the server 120 may run one or more services or software applications that enable the execution of an image capture method or a training method of a neural network.
In some embodiments, the server 120 may also provide other services or software applications that may include non-virtual environments and virtual environments. In certain embodiments, these services may be provided as web-based services or cloud services, for example, provided to users of client devices 101, 102, 103, 104, 105, and/or 106 under a software as a service (SaaS) model.
In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof, which may be executed by one or more processors. A user operating a client device 101, 102, 103, 104, 105, and/or 106 may, in turn, utilize one or more client applications to interact with the server 120 to take advantage of the services provided by these components. It should be understood that a variety of different system configurations are possible, which may differ from system 100. Accordingly, fig. 1 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
A user may use client devices 101, 102, 103, 104, 105, and/or 106 to take an image or capture a preview image. The client device may provide an interface that enables a user of the client device to interact with the client device, e.g., the client may obtain a preview image via the capture device and send the preview image to the server. The client device may also output information to the user via the interface, e.g., the client may output to the user a final image captured by the capturing device according to the capturing parameters sent back by the server. Although fig. 1 depicts only six client devices, those skilled in the art will appreciate that any number of client devices may be supported by the present disclosure.
Client devices 101, 102, 103, 104, 105, and/or 106 may include various types of computer devices, such as portable handheld devices, general purpose computers (such as personal computers and laptops), workstation computers, wearable devices, smart screen devices, self-service terminal devices, service robots, gaming systems, thin clients, various messaging devices, sensors or other sensing devices, and so forth. These computer devices may run various types and versions of software applications and operating systems, such as MICROSOFT Windows, APPLE iOS, UNIX-like operating systems, Linux, or Linux-like operating systems (e.g., GOOGLE Chrome OS); or include various Mobile operating systems such as MICROSOFT Windows Mobile OS, iOS, Windows Phone, Android. Portable handheld devices may include cellular telephones, smart phones, tablets, Personal Digital Assistants (PDAs), and the like. Wearable devices may include head-mounted displays (such as smart glasses) and other devices. The gaming system may include a variety of handheld gaming devices, internet-enabled gaming devices, and the like. The client device is capable of executing a variety of different applications, such as various Internet-related applications, communication applications (e.g., email applications), Short Message Service (SMS) applications, and may use a variety of communication protocols.
Network 110 may be any type of network known to those skilled in the art that may support data communications using any of a variety of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. By way of example only, one or more networks 110 may be a Local Area Network (LAN), an ethernet-based network, a token ring, a Wide Area Network (WAN), the internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (e.g., bluetooth, WIFI), and/or any combination of these and/or other networks.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, midrange servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architecture involving virtualization (e.g., one or more flexible pools of logical storage that may be virtualized to maintain virtual storage for the server). In various embodiments, the server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above, as well as any commercially available server operating systems. The server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, and the like.
In some implementations, the server 120 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of the client devices 101, 102, 103, 104, 105, and 106. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of client devices 101, 102, 103, 104, 105, and 106.
In some embodiments, the server 120 may be a server of a distributed system, or a server incorporating a blockchain. The server 120 may also be a cloud server, or an intelligent cloud computing server or intelligent cloud host with artificial intelligence technology. A cloud server is a host product in a cloud computing service system that addresses the difficulty of management and the weak service scalability of traditional physical hosts and Virtual Private Server (VPS) services.
The system 100 may also include one or more databases 130. In some embodiments, these databases may be used to store data and other information. For example, one or more of the databases 130 may be used to store information such as audio files and video files. The data store 130 may reside in various locations. For example, the data store used by the server 120 may be local to the server 120, or may be remote from the server 120 and may communicate with the server 120 via a network-based or dedicated connection. The data store 130 may be of different types. In certain embodiments, the data store used by the server 120 may be a database, such as a relational database. One or more of these databases may store, update, and retrieve data to and from the database in response to the command.
In some embodiments, one or more of the databases 130 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key-value stores, object stores, or regular stores supported by a file system.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure.
According to an aspect of the present disclosure, there is provided an image photographing method. As shown in fig. 2, the image photographing method includes: step S201, performing semantic segmentation on the preview image to obtain a plurality of regions; step S202, determining shooting parameters for each of the plurality of regions; and step S203, for each of the plurality of regions, instructing the photosensitive unit corresponding to the region in the photographing apparatus to perform photographing according to the photographing parameters for the region.
In this way, the preview image is segmented by semantic segmentation, shooting parameters are determined for each segmented region, and each region is then shot with its corresponding parameters, so that an image in which both the bright and dark portions are correctly imaged can be obtained from a single shot, avoiding a large occupation of storage space. In addition, since the shooting parameters are uniform within each semantically segmented region, the poor image appearance caused by applying different shooting parameters to different parts of the same object is avoided.
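Before turning to the details, a minimal orchestration sketch of steps S201 to S203 may help. The `segment`, `predict_params`, and `sensor` interfaces below are assumptions introduced for illustration, not components defined by the disclosure:

```python
def shoot_with_per_region_parameters(preview, segment, predict_params, sensor):
    """Sketch of steps S201-S203, assuming hypothetical interfaces:

    - segment(preview) -> dict mapping region id to (boolean mask, type)
    - predict_params(preview, mask, region_type) -> shooting parameters
    - sensor: supports per-pixel parameter programming and a single capture
    """
    regions = segment(preview)                                # step S201
    for region_id, (mask, region_type) in regions.items():
        params = predict_params(preview, mask, region_type)   # step S202
        sensor.set_parameters(mask, params)                   # step S203
    return sensor.capture()  # all light-sensing units fire simultaneously
```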
The photographing device may be, for example, a dedicated image capturing device, such as various portable/fixed cameras, video cameras, and the like, or an image capturing unit in an electronic device, such as a camera on a mobile phone or a tablet computer, and is not limited herein.
The photographing apparatus has a number of light-sensing units, such as the pixels of a digital image sensor (e.g., a CMOS or CCD sensor). These light-sensing units apply specific light-sensing parameters when shooting, which may include, for example, a light-sensing time and a sensitivity. In some examples, the light-sensing time may be expressed as the interval between the turn-on time and the turn-off time of the light-sensing unit, and the sensitivity may be adjusted by controlling the electrical-signal gain of the light-sensing unit.
In addition to the above-described light sensing parameters, other photographing parameters such as an aperture may be adjusted at the time of photographing to control the exposure amount. According to some embodiments, the photographing parameter may include at least one of sensitivity, a light sensing time, and an aperture. Different shooting parameter settings may be applicable to scenes of different brightness.
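As a concrete illustration of how the parameters named above might be grouped per region, consider the following sketch; the field names, units, and example values are assumptions, not values from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class ShootingParameters:
    """Per-region shooting parameters (illustrative assumption)."""
    iso: int                 # sensitivity, adjusted via electrical-signal gain
    exposure_time_s: float   # light-sensing time: on/off interval of the unit
    aperture_f: float        # aperture as an f-number

# Hypothetical settings for a bright and a dark region of the same scene.
bright_sky = ShootingParameters(iso=100, exposure_time_s=1 / 500, aperture_f=8.0)
dark_foreground = ShootingParameters(iso=800, exposure_time_s=1 / 60, aperture_f=8.0)
```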
Before shooting, a preview image of the scene to be captured may be acquired. According to some embodiments, the preview image may be determined at the time of focusing or at the time of metering. In general, shooting occurs within a short time after focusing or metering, so capturing the preview image at that moment keeps the scene of the preview image close to the scene that is finally shot. It is understood that the semantic segmentation and shooting-parameter determination for the preview image may take some time, so the preview image may also be acquired at an earlier moment; this is not limited herein.
According to some embodiments, the semantic segmentation of the preview image in step S201 may include, for example, inputting the preview image into a trained semantic segmentation neural network to obtain a semantic segmentation result output by that network.
Semantic segmentation is an image processing method that divides an image into regions with different semantics. The semantic segmentation result for an image may include different objects, or different parts of objects, in the image. In one example, the semantic segmentation result for an image of a beach scene may include a sky region, a beach region, a sea-surface region, a distant-mountain region, and so on, and may further include regions corresponding to people and objects on the beach. In another example, the semantic segmentation result for an image of a road may include a single vehicle region containing all adjacent vehicles, or a plurality of vehicle regions distinguishing each vehicle. In addition, the semantic segmentation result may include only the regions themselves, or may additionally include the type of each region, and further the confidence of each region. These variations can be achieved by training the semantic segmentation neural network with different training sample sets or different ground-truth labels.
In some embodiments, the semantic segmentation of the preview image is performed on the terminal (e.g., a cell phone), so a lighter-weight semantic segmentation neural network may be used. It will be appreciated that those skilled in the art can select or design a suitable semantic segmentation neural network according to requirements for accuracy, computational power, or other aspects, which is not limited herein.
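A sketch of such a segmentation pass is given below. Here `seg_model` stands in for whatever trained lightweight network is deployed on the terminal, and its output contract (per-pixel class scores) is an assumption:

```python
import numpy as np

def segment_preview(preview_rgb, seg_model):
    """Run semantic segmentation on a preview image (sketch).

    `seg_model` is a hypothetical callable assumed to return per-pixel
    class scores of shape (H, W, num_classes).
    """
    scores = seg_model(preview_rgb.astype(np.float32) / 255.0)
    class_map = scores.argmax(axis=-1)        # semantic type of each pixel
    confidence = scores.max(axis=-1)          # optional per-pixel confidence
    regions = {int(c): (class_map == c) for c in np.unique(class_map)}
    return regions, class_map, confidence
```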
According to some embodiments, the step S202 of determining the photographing parameters for each of the plurality of regions may include: for each of the plurality of regions, a shooting parameter is determined according to the type of the region. Therefore, the shooting parameters of each area are determined according to the type of the area, so that the final imaging result of the area conforms to semantic logic. In some examples, two different types of areas having the same photometric result may have different shooting parameters.
According to some embodiments, one or more types of interest may also be preset or dynamically determined among the region types, and regions of a type of interest may be processed differently from regions of other types.
According to some embodiments, determining the shooting parameters according to the type of each region may include, for example, inputting the region and the type of the region in the preview image into a trained neural network to obtain the shooting parameters output by the neural network. Thus, by using a trained neural network, more appropriate shooting parameters can be obtained.
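Under the assumption that the trained network takes a masked region of the preview plus a type encoding and regresses the parameters directly, the per-region inference might look like this sketch (the model interface and its three-value output head are assumptions):

```python
import torch

def predict_region_parameters(model, preview_tensor, region_mask, type_id, num_types):
    """Feed one region of the preview plus its semantic type to a trained
    network and read out predicted shooting parameters (sketch)."""
    masked = preview_tensor * region_mask          # keep only this region's pixels
    type_vec = torch.nn.functional.one_hot(
        torch.tensor(type_id), num_classes=num_types).float()
    with torch.no_grad():
        iso, exposure_time, aperture = model(masked.unsqueeze(0),
                                             type_vec.unsqueeze(0))[0]
    return {"iso": float(iso), "exposure_time_s": float(exposure_time),
            "aperture_f": float(aperture)}
```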
In some embodiments, the neural network used to determine the shooting parameters may be trained using a large number of artificially labeled images, as will be described below.
According to some embodiments, the step S202 of determining the photographing parameters for each of the plurality of regions may further include: for each of the plurality of regions, determining a shooting parameter at least according to a photometric result for the region. Thus, by determining the shooting parameters according to each region's photometric result, each region can be given the correct luminance in the finally captured image.
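One simple, hypothetical mapping from a region's photometric reading to parameters scales a base exposure toward a mid-gray target; the constants and the ISO/exposure-time trade-off rule below are illustrative assumptions, not part of the disclosure:

```python
def params_from_metering(region_luminance, base_iso=100, base_exposure_s=1 / 125,
                         target_luminance=0.18):
    """Scale a base exposure so the region lands near mid-gray (sketch).

    `region_luminance` is the metered mean luminance of the region in [0, 1];
    0.18 approximates the photographic mid-gray target.
    """
    gain = target_luminance / max(region_luminance, 1e-6)
    exposure_s = base_exposure_s * gain
    iso = base_iso
    if exposure_s > 1 / 30:                 # cap exposure time, trade for ISO
        iso = min(base_iso * exposure_s / (1 / 30), 6400)
        exposure_s = 1 / 30
    return {"iso": round(iso), "exposure_time_s": exposure_s}
```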
After the shooting parameters for each region are determined, the determined parameters may be transmitted to the light-sensing units (e.g., the pixels) in the corresponding region, and those pixels may be instructed to shoot according to the parameters for the region.
According to some embodiments, the light-sensing units corresponding to each region perform shooting simultaneously according to the shooting parameters for their region. That is, all the light-sensing units in the photographing apparatus may photograph the scene of the preview image at the same time, each according to the shooting parameters it received. In this way, an image in which both the bright and dark portions are imaged normally can be obtained from a single capture, avoiding the occupation of a large amount of storage space.
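Conceptually, instructing the light-sensing units amounts to rasterizing the per-region parameters into per-pixel maps that all pixels consult when they fire together. The sketch below assumes a sensor whose gain and timing are programmable per pixel, which is an assumption about the hardware interface:

```python
import numpy as np

def build_parameter_maps(shape, regions, region_params):
    """Rasterize per-region parameters into per-pixel maps (sketch).

    regions: dict mapping region id -> boolean mask of shape `shape`.
    region_params: dict mapping region id -> {"iso": ..., "exposure_time_s": ...}.
    """
    iso_map = np.zeros(shape, dtype=np.float32)
    time_map = np.zeros(shape, dtype=np.float32)
    for rid, mask in regions.items():
        iso_map[mask] = region_params[rid]["iso"]
        time_map[mask] = region_params[rid]["exposure_time_s"]
    return iso_map, time_map
```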
After shooting, the final image, the semantic segmentation result, and the shooting parameters of each region can be recorded for subsequent viewing. In addition, these data may also be used to further train the neural network used to determine the parameters.
According to another aspect of the present disclosure, a method of training a neural network is provided. As shown in fig. 3, the training method includes: step S301, performing semantic segmentation on a preview image to obtain a plurality of regions and the respective types of the regions; step S302, acquiring a plurality of reference images obtained by shooting the scene captured in the preview image a plurality of times according to a plurality of shooting parameters; step S303, determining, for a sample region among the plurality of regions, a target shooting parameter for the sample region among the plurality of shooting parameters based on the plurality of reference images; step S304, inputting the sample region and the type of the sample region in the preview image into a neural network to obtain a predicted shooting parameter output by the neural network; and step S305, training the neural network based on the target shooting parameter and the predicted shooting parameter.
In this way, the scene of the preview image is shot multiple times with multiple shooting parameters, yielding images of the preview scene, and of its different regions, under different shooting parameters. The most suitable target shooting parameter can then be determined among the multiple shooting parameters from these images, and the region image, region type, and target shooting parameter can be used as a sample to train the neural network, so as to obtain a neural network that outputs the most suitable shooting parameters for regions of different types.
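Steps S301 to S305 then amount to an ordinary supervised regression loop. A compact sketch follows, with the network interface and the mean-squared-error loss both being assumptions rather than choices fixed by the disclosure:

```python
import torch

def train_step(model, optimizer, sample_region, type_id, target_params, num_types):
    """One training step: predict shooting parameters for a sample region and
    regress them toward the target parameters selected from the reference
    shots (sketch)."""
    type_vec = torch.nn.functional.one_hot(
        torch.tensor(type_id), num_classes=num_types).float()
    predicted = model(sample_region.unsqueeze(0), type_vec.unsqueeze(0))
    loss = torch.nn.functional.mse_loss(predicted, target_params.unsqueeze(0))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```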
According to some embodiments, the plurality of reference images obtained by shooting the scene of the preview image multiple times according to the plurality of shooting parameters may be, for example, reference images under different exposure conditions. For each region, the best-imaged reference can be determined according to a histogram or another specific algorithm, and its corresponding shooting parameters taken as the target shooting parameters for the region. In some embodiments, manual annotation may also be used to determine the optimal target shooting parameter among the multiple shooting parameters for each region.
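For instance, a histogram-style selection could score each reference exposure by how few clipped pixels it leaves in the region. The clipping thresholds in this sketch are assumptions chosen for illustration:

```python
import numpy as np

def select_target_parameters(region_mask, reference_images, reference_params):
    """Pick, for one region, the reference shot with the best-exposed
    histogram and return its shooting parameters (sketch).

    reference_images: grayscale images with luminance in [0, 255].
    """
    best_score, best_params = -np.inf, None
    for image, params in zip(reference_images, reference_params):
        pixels = image[region_mask]                       # this region only
        clipped = np.mean((pixels < 5) | (pixels > 250))  # under/overexposed share
        score = -clipped                                  # fewer clipped pixels wins
        if score > best_score:
            best_score, best_params = score, params
    return best_params
```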
It will be appreciated that one skilled in the art may use existing neural networks for processing images or self-designed neural networks as the neural network for determining parameters described above, and may further train it using the methods described above.
According to some embodiments, the sample region may be one region or may be multiple regions. When the sample regions are a plurality of regions, the target photographing parameters may be determined for each of the sample regions, and the regions, the region types, and the target photographing parameters in the preview image corresponding to each of the plurality of sample regions may be used as samples to train the neural network.
According to another aspect of the present disclosure, there is also provided an image photographing apparatus. As shown in fig. 4, the image photographing device 400 includes: a semantic segmentation unit 410 configured to perform semantic segmentation on the preview image to obtain a plurality of regions; a parameter determination unit 420 configured to determine a shooting parameter for each of the plurality of areas; and an instructing unit 430 configured to instruct, for each of the plurality of areas, a photosensitive unit corresponding to the area in the photographing apparatus to perform photographing according to the photographing parameters for the area.
The operations of the units 410-430 of the image capturing apparatus 400 are similar to the operations of the steps S201-S203 of the image capturing method in fig. 2, and are not described herein again.
According to some embodiments, the photographing parameter may include at least one of sensitivity, a light sensing time, and an aperture.
According to some embodiments, the preview image may be determined while focusing or metering.
According to some embodiments, the semantic segmentation unit 410 may be further configured to semantically segment the preview image to obtain the plurality of regions and respective types of the plurality of regions. The parameter determination unit 420 may be further configured to determine, for each of the plurality of regions, a photographing parameter according to a type of the region.
According to some embodiments, determining the photographing parameters according to the type of the region may include: and inputting the area and the type of the area in the preview image into the trained neural network to acquire shooting parameters output by the neural network.
According to some embodiments, the parameter determination unit 420 may be further configured to determine, for each of the plurality of areas, a shooting parameter at least from a photometric result for the area.
According to some embodiments, the photosensitive unit corresponding to each area simultaneously performs photographing according to the photographing parameters for the area.
According to another aspect of the present disclosure, a training apparatus of a neural network is also provided. As shown in fig. 5, the training apparatus 500 includes: a semantic segmentation unit 510 configured to perform semantic segmentation on a preview image to obtain a plurality of regions and the respective types of the regions; an acquisition unit 520 configured to acquire a plurality of reference images obtained by shooting the scene captured in the preview image a plurality of times according to a plurality of shooting parameters; a determination unit 530 configured to determine, for a sample region among the plurality of regions, a target shooting parameter for the sample region among the plurality of shooting parameters based on the plurality of reference images; a neural network 540 configured to receive the sample region and the type of the sample region in the preview image and to output a predicted shooting parameter; and a training unit 550 configured to train the neural network based on the target shooting parameter and the predicted shooting parameter.
The operations of the units 510-550 of the training apparatus 500 are similar to the operations of steps S301-S305 of the neural network training method in fig. 3, and are not repeated here.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of any user personal information involved all comply with the relevant laws and regulations and do not violate public order and good customs.
According to an embodiment of the present disclosure, there is also provided an electronic device, a readable storage medium, and a computer program product.
Referring to fig. 6, a block diagram of an electronic device 600, which may serve as a server or client of the present disclosure, will now be described; it is an example of a hardware device to which aspects of the present disclosure may be applied. The term electronic device is intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 602 or a computer program loaded from a storage unit 608 into a random access memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 can also be stored. The computing unit 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606, an output unit 607, a storage unit 608, and a communication unit 609. The input unit 606 may be any type of device capable of inputting information to the device 600; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a track pad, a track ball, a joystick, a microphone, and/or a remote control. The output unit 607 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer. The storage unit 608 may include, but is not limited to, a magnetic disk or an optical disk. The communication unit 609 allows the device 600 to exchange information/data with other devices over a computer network, such as the internet, and/or various telecommunications networks, and may include, but is not limited to, a modem, a network card, an infrared communication device, a wireless communication transceiver, and/or a chipset, such as a Bluetooth(TM) device, an 802.11 device, a WiFi device, a WiMax device, a cellular communication device, and/or the like.
The computing unit 601 may be any of a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 601 performs the respective methods and processes described above, such as the image capturing method or the training method of the neural network. For example, in some embodiments, the image capturing method or the training method of the neural network may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the image capturing method or the training method of the neural network described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured by any other suitable means (e.g., by means of firmware) to perform the image capturing method or the training method of the neural network.
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server (also called a cloud computing server or cloud host), a host product in the cloud computing service system that addresses the difficulty of management and the weak service scalability of traditional physical hosts and Virtual Private Server (VPS) services. The server may also be a server of a distributed system, or a server incorporating a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be performed in parallel, sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the above-described methods, systems, and apparatus are merely exemplary embodiments or examples, and that the scope of the present invention is not limited by these embodiments or examples, but only by the claims as issued and their equivalents. Various elements in the embodiments or examples may be omitted or replaced with equivalents thereof. Further, the steps may be performed in an order different from that described in the present disclosure. Further, various elements in the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced with equivalent elements that appear after the present disclosure.

Claims (19)

1. An image capturing method comprising:
performing semantic segmentation on the preview image to obtain a plurality of regions;
determining a photographing parameter for each of the plurality of regions; and
for each of the plurality of regions, instructing a photosensitive unit corresponding to the region in the photographing apparatus to perform photographing according to the photographing parameters for the region.
2. The method of claim 1, wherein the semantically segmenting the preview image to obtain a plurality of regions comprises:
semantically segmenting the preview image to obtain the plurality of regions and respective types of the plurality of regions,
and wherein the determining the photographing parameters for each of the plurality of regions comprises:
for each of the plurality of regions, the photographing parameters are determined according to the type of the region.
3. The method of claim 2, wherein the determining the photographing parameters according to the type of the region comprises:
inputting the area and the type of the area in the preview image into a trained neural network to obtain the shooting parameters output by the neural network.
4. The method of any one of claims 1-3, wherein the determining the shooting parameters for each of the plurality of regions comprises:
for each of the plurality of areas, the shooting parameters are determined at least from a photometric result for that area.
5. The method according to any one of claims 1 to 4, wherein the light sensing unit corresponding to each of the regions simultaneously performs photographing according to the photographing parameters for the region.
6. The method of any of claims 1-5, wherein the preview image is determined upon focusing or metering.
7. The method according to any one of claims 1 to 6, wherein the shooting parameter includes at least one of sensitivity, a light sensing time, and an aperture.
8. A method of training a neural network, comprising:
performing semantic segmentation on the preview image to obtain a plurality of regions and respective types of the regions;
acquiring a plurality of reference images obtained by shooting the scene captured in the preview image a plurality of times according to a plurality of shooting parameters;
determining, for a sample region of the plurality of regions, a target photographing parameter of the sample region among the plurality of photographing parameters based on the plurality of reference images;
inputting the sample region and the type of the sample region in the preview image into a neural network to obtain a predicted shooting parameter output by the neural network; and
training the neural network based on the target shooting parameters and the predicted shooting parameters.
9. An image capturing apparatus comprising:
a semantic segmentation unit configured to perform semantic segmentation on the preview image to obtain a plurality of regions;
a parameter determination unit configured to determine a shooting parameter for each of the plurality of regions; and
an instructing unit configured to instruct, for each of the plurality of areas, a light-sensing unit corresponding to the area in the photographing apparatus to perform photographing according to the photographing parameters for the area.
10. The apparatus of claim 9, wherein the semantic segmentation unit is further configured to semantically segment the preview image to obtain the plurality of regions and respective types of the plurality of regions, and wherein the parameter determination unit is further configured to determine, for each of the plurality of regions, the photographing parameter according to the type of the region.
11. The apparatus of claim 10, wherein the determining the photographing parameters according to the type of the region comprises:
inputting the area and the type of the area in the preview image into a trained neural network to obtain the shooting parameters output by the neural network.
12. The apparatus according to any one of claims 9 to 11, wherein the parameter determination unit is further configured to determine, for each of the plurality of areas, the shooting parameter at least from a photometric result for that area.
13. The apparatus according to any one of claims 9 to 12, wherein the light sensing unit corresponding to said each area simultaneously performs photographing according to the photographing parameters for the area.
14. The apparatus of any one of claims 9-13, wherein the preview image is determined upon focusing or metering.
15. The apparatus according to any one of claims 9 to 14, wherein the shooting parameter includes at least one of sensitivity, a light sensing time, and an aperture.
16. An apparatus for training a neural network, comprising:
a semantic segmentation unit configured to perform semantic segmentation on the preview image to obtain a plurality of regions and respective types of the plurality of regions;
an acquisition unit configured to acquire a plurality of reference images obtained by shooting the scene captured in the preview image a plurality of times in accordance with a plurality of shooting parameters;
a determination unit configured to determine, for a sample region of the plurality of regions, a target photographing parameter of the sample region among the plurality of photographing parameters based on the plurality of reference images;
a neural network configured to receive the sample region and a type of the sample region in the preview image to output a predicted photographing parameter; and
a training unit configured to train the neural network based on the target photographing parameters and the predicted photographing parameters.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-8.
19. A computer program product comprising a computer program, wherein the computer program realizes the method of any one of claims 1-8 when executed by a processor.
Application CN202111423960.5A, priority date 2021-11-26, filing date 2021-11-26: Image shooting method, neural network training method, device, equipment and medium. Published as CN114071024A (pending).

Priority Applications (1)

Application Number: CN202111423960.5A
Priority Date / Filing Date: 2021-11-26
Title: Image shooting method, neural network training method, device, equipment and medium

Publications (1)

Publication Number: CN114071024A
Publication Date: 2022-02-18

Family

ID=80276800

Family Applications (1)

Application Number: CN202111423960.5A (pending)
Priority Date / Filing Date: 2021-11-26
Title: Image shooting method, neural network training method, device, equipment and medium

Country Status (1)

Country: CN; Publication: CN114071024A (en)



Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108353140A (en) * 2015-11-16 2018-07-31 微软技术许可有限责任公司 Image sensor system
US20200068107A1 (en) * 2017-06-08 2020-02-27 Fujifilm Corporation Image capturing apparatus, control method for image capturing apparatus, and control program for image capturing apparatus
JP2019129474A (en) * 2018-01-26 2019-08-01 キヤノン株式会社 Image shooting device
CN108495050A (en) * 2018-06-15 2018-09-04 Oppo广东移动通信有限公司 Photographic method, device, terminal and computer readable storage medium
CN109712177A (en) * 2018-12-25 2019-05-03 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and computer readable storage medium
CN109495689A (en) * 2018-12-29 2019-03-19 北京旷视科技有限公司 A kind of image pickup method, device, electronic equipment and storage medium
CN111669492A (en) * 2019-03-06 2020-09-15 青岛海信移动通信技术股份有限公司 Method for processing shot digital image by terminal and terminal
CN110430359A (en) * 2019-07-31 2019-11-08 北京迈格威科技有限公司 Shoot householder method, device, computer equipment and storage medium
CN110493538A (en) * 2019-08-16 2019-11-22 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
JP2021061546A (en) * 2019-10-08 2021-04-15 キヤノン株式会社 Imaging apparatus, control method of the same, and program
CN112911139A (en) * 2021-01-15 2021-06-04 广州富港生活智能科技有限公司 Article shooting method and device, electronic equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115514897A (en) * 2022-11-18 2022-12-23 北京中科觅境智慧生态科技有限公司 Method and device for processing image
CN115514897B (en) * 2022-11-18 2023-04-07 北京中科觅境智慧生态科技有限公司 Method and device for processing image


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination