WO2020119588A1 - Shooting method and device - Google Patents
- Publication number
- WO2020119588A1 (application PCT/CN2019/123500)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- real
- user
- score
- server device
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/00127—Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture
- H04N1/00204—Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture with a digital computer or a digital computer system, e.g. an internet server
- H04N1/00209—Transmitting or receiving image data, e.g. facsimile data, via a computer, e.g. using e-mail, a computer network, the internet, I-fax
- H04N1/00244—Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture with a digital computer or a digital computer system, e.g. an internet server with a server, e.g. an internet server
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/617—Upgrading or updating of programs or applications for camera control
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/633—Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
- H04N23/64—Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
- H04N2201/00—Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
- H04N2201/0077—Types of the still picture apparatus
- H04N2201/0084—Digital still camera
Definitions
- This application relates to the field of information technology, and in particular to a shooting method and device.
- One of the purposes of this application is to provide a shooting method and device.
- To this end, some embodiments of the present application provide a shooting method.
- Some embodiments of the present application also provide a shooting device, which includes a memory for storing computer program instructions and a processor for executing computer program instructions; when the computer program instructions are executed by the processor, the device is triggered to execute the shooting method.
- Some embodiments of the present application also provide a computer-readable medium on which computer program instructions are stored; the computer-readable instructions can be executed by a processor to implement the shooting method.
- In this solution, a first image is acquired through a camera module, a second image conforming to a composition pattern is then obtained according to the first image, the shooting parameters applicable to the second image are determined, and the second image is shot based on those shooting parameters.
- In this way, the specific content of the first image can be used as the processing basis in various scenes, so that the user obtains a second image that conforms to the composition pattern, and appropriate shooting parameters are determined automatically for the user; the solution can therefore satisfy the user's needs in multiple shooting scenarios.
- FIG. 1 is a processing flowchart of a shooting method provided by an embodiment of the present application.
- FIG. 2 is a schematic diagram of a display effect of a composition prompt message in an embodiment of the present application
- FIG. 3 is a processing flowchart of another shooting method provided by an embodiment of the present application.
- FIG. 4 is a schematic structural diagram of a shooting device provided by an embodiment of the present application.
- Both the terminal and the service network device include one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
- The memory may include non-permanent memory, random access memory (RAM) and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
- Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage can be implemented by any method or technology.
- The information may be computer-readable instructions, data structures, program modules, or other data.
- Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by computing devices.
- The embodiments of the present application provide a shooting method.
- The solution uses the specific content of the first image acquired by the camera module as the processing basis in various scenarios, so that the user can acquire a second image conforming to the composition pattern; appropriate shooting parameters are then determined automatically for the user and used to shoot, meeting the user's needs in a variety of shooting scenarios.
- The executing body of the method may include, but is not limited to, various types of user equipment with shooting functions, such as cameras, mobile phones, tablet computers, and digital cameras.
- the user equipment may also be a device with network connection capabilities.
- The server device includes, but is not limited to, a network host, a single network server, a set of multiple network servers, or a cloud-computing-based computer set.
- The cloud is composed of a large number of hosts or network servers based on cloud computing (Cloud Computing), where cloud computing is a form of distributed computing: a virtual computer composed of a group of loosely coupled computers.
- In the photographing method provided by an embodiment of the present application, a first image is first obtained through a camera module; after the first image is acquired, a second image conforming to a composition pattern is obtained according to the first image, then the shooting parameters suitable for the second image are determined, and the second image is shot based on those shooting parameters.
- The camera module includes at least a lens, an optical sensor, and corresponding circuits and other components, and is used to obtain an image of a scene to complete shooting; the first image is the current framing content of the camera module.
- The cameras of devices such as mobile phones and tablets are camera modules in this sense.
- When the user uses a mobile phone to shoot, the user opens the shooting app (application) to start the camera, points the camera at the scene to be photographed, and obtains the first image of the scene through the camera.
- The second image refers to an image conforming to the composition pattern, obtained on the basis of the first image.
- In a first approach, the camera module may be adjusted to change the current viewfinder content, thereby changing the first image into the second image.
- Since this approach requires the user to adjust the camera module, the user can be told how to make the adjustment through prompt information.
- In a second approach, processing such as cropping out part of the content or transforming part of the image content may be performed on the basis of the first image; in this way, the second image is obtained by processing the first image with the processing module of the user equipment.
- FIG. 1 shows a processing flow of a shooting method provided by an embodiment of the present application.
- The method uses the first approach to obtain the second image and includes the following processing steps:
- Step S101: Acquire a first image through a camera module, and determine composition prompt information applicable to the first image.
- The composition prompt information may be any information for prompting the user how to compose the picture, for example, various auxiliary lines or text prompts.
- Step S102: Add the composition prompt information to the display area of the first image, so that the user can adjust the camera module according to the composition prompt information.
- The display area of the first image is the area in the user device that is used to display the first image acquired by the camera module; for example, when the user shoots with a mobile phone, the scene captured by the camera is displayed in real time in the display area of the shooting app so that the user can view the current viewfinder content.
- The composition prompt information may be two horizontal and two vertical auxiliary lines dividing the display area into 9 equal parts, prompting the user to compose the image against a nine-square grid, as shown in FIG. 2, in which 201-204 are the auxiliary lines and 205 and 206 are scenes in the picture.
- The user can adjust the camera module according to the composition prompt information to change the framing content; for example, placing specific scenes on the auxiliary lines generally yields better shooting results.
- Text prompt information can also be added on this basis; for example, after the two horizontal and two vertical auxiliary lines are added to the display area, text prompts can explain how to use them, so that even photography beginners can compose with the help of the various composition prompts.
- Step S103: After the user completes the adjustment, acquire a second image through the camera module. During the adjustment, the framing content of the camera module changes continuously; when the user completes the adjustment according to the composition prompt information, the framing content changes from the initial first image into the second image, which is the image to be shot after the user completes the composition.
- Step S104: Determine shooting parameters suitable for the second image, and shoot the second image based on the shooting parameters.
- The shooting parameters are parameters that affect the shooting effect, such as shutter speed and aperture. Shooting parameters suitable for the second image are determined automatically according to the scene content contained in the second image, so that the finally captured image has a better shooting effect.
- In some embodiments, the scene information and the composition mode may first be determined according to the first image, where the scene information refers to the current shooting scene represented by the scenery or environment in the first image; for example, when most of the content in the first image is a close-up of a person, the current scene information may be considered a portrait scene, and the scene information may also be night scene, landscape, and so on.
- the composition mode refers to a mode for arranging the positions of various scenes in an image when shooting.
- Commonly used composition modes include the nine-square grid and the golden section.
- The scene information and composition modes listed in this embodiment are only examples; other scene information and composition modes, existing now or appearing in the future, that are applicable to the present invention are also included in its protection scope and are incorporated here by reference.
- When determining the scene information and the composition mode according to the first image, a deep-learning approach can be used: a sufficient number of sample images marked with scene information and composition mode are collected in advance, and a recognition model is trained on these sample images. The recognition model can then be used to recognize the scene information and composition pattern corresponding to the first image. That is, the scene information and composition mode may be determined according to the first image and a recognition model, where the recognition model is obtained by training on sample images marked with scene information and composition mode.
- One recognition model can be used to recognize the scene information and the composition pattern of an image at the same time, or two recognition models can recognize the scene information and the composition pattern separately.
- In the latter case, each model needs to be trained with its own set of sample images.
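The train-then-recognize flow described above can be illustrated with a deliberately simplified stand-in. The nearest-centroid classifier and the three-number "feature vectors" below are hypothetical placeholders for a real deep-learning model and real image features; only the flow (train on labelled samples, then recognize scene and composition for a new image) mirrors the text.

```python
# Toy stand-in for the recognition model: sample "images" are reduced to
# hand-made feature vectors, and recognition picks the nearest labelled
# centroid. A real implementation would use a deep network over pixels.

def _centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def _dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train_recognition_model(samples):
    """samples: list of (feature_vector, (scene, composition)) pairs."""
    by_label = {}
    for features, label in samples:
        by_label.setdefault(label, []).append(features)
    return {label: _centroid(vecs) for label, vecs in by_label.items()}

def recognize(model, features):
    """Return the (scene, composition) label of the nearest centroid."""
    return min(model, key=lambda label: _dist2(model[label], features))

# Hypothetical training set: [brightness, face_ratio, edge_density] per image.
samples = [
    ([0.2, 0.0, 0.3], ("night", "nine-grid")),
    ([0.3, 0.1, 0.2], ("night", "nine-grid")),
    ([0.8, 0.6, 0.1], ("portrait", "golden-section")),
    ([0.7, 0.7, 0.2], ("portrait", "golden-section")),
]
model = train_recognition_model(samples)
scene, composition = recognize(model, [0.25, 0.05, 0.25])
```

A single model with one combined label, as here, corresponds to the first option above; training two separate models on the same samples with scene-only and composition-only labels would give the second.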
- The processes of model training and recognition can both be completed in the user equipment or in the server device; alternatively, the model training part is completed in the server device while the recognition part is completed in the user device.
- In the latter case, the user equipment may update to the latest recognition model from the server device according to a predetermined strategy.
- Alternatively, when determining the scene information and the composition mode according to the first image, the user equipment may send the first image to the server device, so that the server device determines the scene information and composition mode of the first image according to the first image and the recognition model, and sends the determined scene information and composition mode back to the user equipment.
- The user equipment receives the scene information and composition pattern of the first image sent by the server device, thereby obtaining the information required for subsequent processing.
- In this way, the computing power of the server device can be used to improve the accuracy and efficiency of processing, while reducing the processing load on the user device side and lowering the requirements on its processing capacity.
- The data interaction between the user equipment and the server device can use various networks, such as a Wi-Fi network, a mobile data network, or a Bluetooth network.
- The first image generally uses a higher-resolution image format; therefore, when the user equipment sends the first image to the server device over the network, it often needs to occupy large bandwidth resources, and especially when using a mobile data network it also consumes a large amount of traffic.
- To address this, the present application provides another embodiment in which the user equipment first compresses the first image and then sends the compressed first image to the server device, so that the server device determines the scene information and composition mode of the first image according to the compressed first image and the recognition model. Since the compressed first image can still express the scenes contained in the image, compression barely affects the recognition result, so compressing the image before sending reduces both bandwidth consumption and traffic consumption.
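The compress-before-upload step can be sketched as a simple downscale. The 2×2 block-averaging below is only an illustrative compression (a real client would use JPEG re-encoding or similar), and `send_to_server` is a hypothetical placeholder that just counts "bytes" sent.

```python
def downscale(image, factor=2):
    """Downscale a 2D grayscale image (list of rows) by block-averaging.

    A stand-in for real image compression: the compressed image still
    expresses the scenes it contains, so recognition results are preserved
    while far fewer bytes cross the network.
    """
    h, w = len(image), len(image[0])
    out = []
    for y in range(0, h - factor + 1, factor):
        row = []
        for x in range(0, w - factor + 1, factor):
            block = [image[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) // len(block))
        out.append(row)
    return out

def send_to_server(image):
    """Hypothetical network call; returns the pixel count as 'bytes sent'."""
    return sum(len(row) for row in image)

first_image = [[(x + y) % 256 for x in range(8)] for y in range(8)]
compressed = downscale(first_image, factor=2)
bytes_sent = send_to_server(compressed)
```

With a factor of 2, the upload is a quarter of the original size, which is where the bandwidth and traffic savings in the text come from.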
- After the scene information and the composition mode are determined, the composition prompt information applicable to the first image may be determined from them. For example, when the determined scene information is a night scene and the determined composition mode is the nine-square grid, the composition prompt information can be auxiliary lines at specific positions together with corresponding text prompts, guiding the user to move a specific scene to a specific position in the picture, for example making a street lamp in the picture coincide with one of the auxiliary lines so that it sits at a one-third position of the frame.
- In some embodiments, the shooting method further includes: acquiring, through the camera module, the real-time image during the user's adjustment process, and determining and displaying the score of the real-time image to the user to assist the user in completing the adjustment.
- The real-time image is a series of images captured by the camera module during the user's adjustment and covers the change from the first image to the second image.
- When the score of a real-time image is high, the real-time image can be regarded as having a better shooting effect, so the user can use the score to assist the adjustment process.
- At this time, the composition prompt information determined based on the first image is displayed on the user's screen, as shown in FIG. 2. On this basis, the user adjusts the camera of the mobile phone according to the auxiliary lines 201 to 204 in FIG. 2 to change the framing content, and the framing content that changes in this process is the real-time image.
- To reduce the processing load, not all real-time images need to be processed; several frames can be selected for processing instead.
- The selection rules can be preset. For example, scoring can be triggered by user input, that is, the current real-time image is scored when the user clicks or enters a specific gesture; or by the state of the device, for example using the phone's gyroscope information to score the current real-time image when the movement amplitude of the phone is below a preset value or the phone is still; or the current real-time image can be scored at a preset time interval, such as every 1 second.
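The three triggers above can be combined into one small predicate. The threshold values, field names, and function name below are illustrative assumptions, not values from the application.

```python
# Decide whether the current real-time frame should be scored.
# A frame is scored when the user explicitly asks, when the device is
# (nearly) still according to the gyroscope, or when enough time has
# passed since the last scored frame. All thresholds are assumed values.

MOTION_THRESHOLD = 0.05   # gyroscope amplitude below which the phone counts as still
SCORE_INTERVAL_S = 1.0    # score at least once per second

def should_score_frame(user_tapped, gyro_amplitude, now_s, last_score_s):
    if user_tapped:                       # explicit click or gesture
        return True
    if gyro_amplitude < MOTION_THRESHOLD:  # device still or barely moving
        return True
    return (now_s - last_score_s) >= SCORE_INTERVAL_S  # periodic fallback
```

In practice such a predicate would run once per preview frame, keeping the scoring workload far below the camera's frame rate.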
- The score of the real-time image can be displayed in the display area of the real-time image, for example in one corner of the display area, so that the user can quickly learn the current score and decide whether further adjustment is needed.
- When determining the score, a deep-learning method can also be used: a sufficient number of sample images that have been manually marked with scores are collected in advance, and a score regression model is trained on these sample images.
- The score regression model can then be used to compute the score of a real-time image, that is, inputting the real-time image yields its score. Therefore, when determining the score of the real-time image, the score can be calculated according to the real-time image and the score regression model, where the score regression model is obtained by training on sample images marked with scores.
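A minimal stand-in for the score regression model: ordinary least squares on a single hand-made feature, fitted to manually scored samples. A real model would be a deep regressor over pixels; only the fit-then-predict flow follows the text, and the sample values are invented for illustration.

```python
def fit_score_regression(samples):
    """Least-squares fit of score = a * feature + b on (feature, score) pairs."""
    n = len(samples)
    sx = sum(f for f, _ in samples)
    sy = sum(s for _, s in samples)
    sxx = sum(f * f for f, _ in samples)
    sxy = sum(f * s for f, s in samples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def predict_score(model, feature):
    a, b = model
    return a * feature + b

# Hypothetical samples: (composition-alignment feature, human score out of 10).
samples = [(0.1, 2.0), (0.5, 5.0), (0.9, 9.0)]
model = fit_score_regression(samples)
score = predict_score(model, 0.5)
```

Better-composed frames (higher feature value) get higher predicted scores, which is exactly the feedback the user sees in the display area.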
- The training of the score regression model and the scoring process can be completed in the user equipment or in the server device; alternatively, the model training part is completed in the server device and the scoring part in the user device, and the user equipment can update to the latest score regression model from the server device according to a predetermined strategy.
- The server device may collect image samples in advance and train on them to obtain the score regression model.
- When scoring, the user device may send the real-time image to the server device, and the server device calculates the score of the real-time image using the real-time image and the score regression model and returns it to the user device.
- The user equipment receives the score of the real-time image sent by the server device, thereby determining the score and displaying it in the display area.
- To save bandwidth, the user equipment may adopt a method similar to that used for the first image: compress the real-time image and send the compressed real-time image to the server device, so that the server device calculates the score according to the compressed real-time image and the score regression model, thereby reducing bandwidth and traffic consumption.
- A more refined method can be adopted when training the score regression model.
- That is, a score regression model corresponding to each preset area is obtained by training on sample images of that preset area, where a preset area is an area divided based on geographic location, such as a scenic spot.
- The sample images may be photos taken in the preset area. Since the scenery of each scenic spot differs, the scoring criteria also differ; therefore, a score regression model trained on the sample images of a given scenic spot can give more accurate scores for images of that scenic spot.
- The preset area to which the real-time image belongs can be determined according to the positioning information obtained when the real-time image is acquired.
- The score of the real-time image can then be calculated with the score regression model corresponding to that preset area, improving the accuracy of the score and providing the user with more accurate reference information.
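Selecting the region-specific model can be sketched as a bounding-box lookup over preset areas. The coordinates, area names, and the fallback to a generic model when no area matches are all illustrative assumptions.

```python
# Preset areas as (min_lat, max_lat, min_lon, max_lon) bounding boxes,
# each mapped to the name of its dedicated score regression model.
PRESET_AREAS = {
    "west-lake":  (30.20, 30.28, 120.10, 120.18),
    "great-wall": (40.35, 40.45, 116.00, 116.10),
}

def model_for_position(lat, lon, default="generic"):
    """Return the score-model name for the preset area containing (lat, lon)."""
    for name, (lat0, lat1, lon0, lon1) in PRESET_AREAS.items():
        if lat0 <= lat <= lat1 and lon0 <= lon <= lon1:
            return name
    # Position falls in no preset area: fall back to a generic model.
    return default
```

The same lookup also serves the recommended-image feature described later: when the positioning information lands inside a preset area, area-specific resources apply.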
- Before the second image is acquired through the camera module, it may first be determined whether the user has completed the adjustment.
- This judgment can be made with the gyroscope built into the user equipment: the gyroscope information is acquired, and if it shows that the user equipment has not moved within a preset duration, or that its movement amplitude is below a preset value, it can be determined that the adjustment has been completed. Whether the user has completed the adjustment is thus determined based on the gyroscope information.
- After that, automatic focusing is performed, and the focused second image is obtained through the camera module; in this embodiment, the second image is therefore the framing content acquired by the camera module after autofocus has completed.
- When determining the shooting parameters, the focus area of the second image may be identified, the brightness of the focus area and the brightness of the global area may be determined, and the shooting parameters applicable to the second image may then be determined from the brightness of the focus area and the brightness of the global area.
- Deep learning can again be used: sample images marked with shooting parameters are obtained in advance, the focus area of each sample image is recognized and determined, the brightness of the focus area and of the global area are computed, and a parameter statistical model is trained on the sample images with their determined focus-area brightness, global-area brightness, and shooting parameters.
- The parameter statistical model then takes the brightness of the focus area of the second image and the brightness of the global area as inputs and outputs shooting parameters suitable for the second image.
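The brightness statistics feeding the parameter model can be computed directly; the two-rule lookup standing in for the trained parameter statistical model, and its shutter/ISO values, are illustrative assumptions only.

```python
def mean_brightness(image, region=None):
    """Mean pixel value of a 2D grayscale image, optionally restricted to a
    (top, left, height, width) region such as the focus area."""
    if region is not None:
        top, left, h, w = region
        image = [row[left:left + w] for row in image[top:top + h]]
    total = sum(sum(row) for row in image)
    count = sum(len(row) for row in image)
    return total / count

def choose_parameters(focus_brightness, global_brightness):
    """Toy stand-in for the parameter statistical model: pick shutter/ISO
    from the focus-area brightness, bumping ISO for dark surroundings."""
    shutter = "1/30" if focus_brightness < 64 else "1/125"
    iso = 800 if global_brightness < 64 else 100
    return {"shutter": shutter, "iso": iso}

# Dark 8x8 scene with a brighter 2x2 focus area in the centre.
image = [[20] * 8 for _ in range(8)]
for y in range(3, 5):
    for x in range(3, 5):
        image[y][x] = 200

focus_b = mean_brightness(image, region=(3, 3, 2, 2))
global_b = mean_brightness(image)
params = choose_parameters(focus_b, global_b)
```

Here the bright subject allows a faster shutter while the dark surroundings raise the ISO, which is the kind of trade-off the trained model is meant to capture.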
- The training of the parameter statistical model and the determination of shooting parameters may both be completed in the user equipment or in the server device.
- Alternatively, the model training part is completed in the server device and the shooting-parameter determination part in the user device, and the user device may update to the latest parameter statistical model from the server device according to a predetermined strategy.
- In one arrangement, the server device pre-acquires sample images marked with shooting parameters, recognizes and determines the focus area of each sample image, computes the brightness of the focus area and the brightness of the global area, and trains the parameter statistical model on the sample images with their determined focus-area brightness, global-area brightness, and shooting parameters.
- When the user equipment needs to determine the shooting parameters applicable to the second image, it can first identify the focus area of the second image, determine the brightness of the focus area and the brightness of the global area, and then send these brightness values to the server device; the server device determines the shooting parameters suitable for the second image based on the trained parameter statistical model and returns them to the user device.
- In some embodiments, before the composition prompt information applicable to the first image is determined according to the first image currently acquired by the camera module, positioning information may first be obtained; whether the device is in a preset area is judged according to the positioning information, and when it is in a preset area, recommended images belonging to that preset area are presented to the user.
- The preset areas may be preset scenic spots. When the positioning information points to a certain scenic spot, the user can be considered to be currently taking photos in that scenic spot, and the recommended images belonging to the preset area may be photos of the scenic spot, giving the user a reference for taking photos.
- The processing of positioning and recommendation can be completed by the server device: the user device sends the positioning information to the server device, obtains from it the recommended images belonging to the preset area, and shows the user the recommended images.
- The server device may collect images belonging to each preset area in advance so as to provide the user with recommended images.
- An embodiment of the present application also provides a shooting assistance system adopting the aforementioned shooting method.
- The system is composed of two parts, a server and a client, where the server is the aforementioned server device and the client is the aforementioned user equipment.
- the server is used to implement the following functions:
- the server is used to collect and store high-quality shooting samples as image samples for model training. These image samples may contain image data, shooting parameters, GPS information, device model, shooting time and other information.
- The secondary attributes of the image samples are marked, including scene information, score, composition mode, etc.
- A parameter statistical model for giving shooting parameters is trained; it is stored in the server and used to determine the shooting parameters based on the brightness information uploaded by the client.
- a recognition model for recognizing scene information is trained, and the recognition model is sent to the client for recognizing scene information on the client.
- A score regression model is trained; it is stored in the server and used to score the images uploaded by the client.
- A recognition model for the composition pattern is trained and sent to the client, so that the composition pattern can be recognized on the client.
- the client is used to implement the following functions:
- The client uploads GPS information; the server determines the scenic area to which the current location belongs and recommends to the client excellent works (i.e., recommended images) from that scenic area.
- The user can refer to the recommended works to frame the shot; the image is obtained through the camera module of the client, the client locally recognizes the scene and composition pattern of the image with the recognition models, and then gives composition prompt information such as auxiliary lines.
- The user adjusts the framing content according to the auxiliary lines, during which real-time images are generated. The client periodically uploads reduced real-time images to the server; the server scores them with the score regression model and returns the scores to the client, which displays them on the screen for the user's reference.
- The client reads the built-in gyroscope information; when the user stops moving, it determines that the adjustment has been completed and performs autofocus.
- FIG. 3 shows a photographing method provided by another embodiment of the present application.
- The method adopts the aforementioned second approach to obtain the second image and includes the following processing steps:
- Step S301: Obtain the first image through the camera module.
- Step S302: Determine a composition pattern suitable for the first image according to the first image.
- Here, the aforementioned recognition model may be used, with recognition completed in the user equipment or in the server device, to determine the composition mode suitable for the first image.
- Step S303: According to the composition pattern, determine within the first image a second image that matches the composition pattern.
- The second image in this embodiment is not obtained by the user adjusting the camera module, but by performing image processing on the basis of the first image.
- various image processing methods may be used, such as cropping and stitching the first image.
- for example, if the composition pattern applicable to the first image is a nine-square-grid composition pattern and part of the image content on the left side of the first image does not conform to that pattern, the first image can be cropped when determining the second image, removing the non-conforming content on the left, thereby obtaining a second image that conforms to the composition pattern.
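The cropping step in this example can be sketched as a simple column cut. How many columns to remove would come from the composition analysis; the cut widths below are illustrative, not values given in the application.

```python
def crop_to_pattern(pixels, cut_left=0, cut_right=0):
    """Crop columns from the left and/or right of a 2-D image, as in
    the example where content on the left of the first image that does
    not fit the nine-square-grid pattern is removed to obtain the
    second image. Cut widths are whatever the composition analysis
    reports; no specific rule is given in the application.
    """
    width = len(pixels[0])
    return [row[cut_left:width - cut_right] for row in pixels]

first = [[c for c in range(6)] for _ in range(4)]  # toy 6x4 "first image"
second = crop_to_pattern(first, cut_left=2)        # drop 2 non-conforming left columns
```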
- Step S304 Determine the shooting parameters applicable to the second image, and shoot the second image based on the shooting parameters.
- the specific processing procedure of this step is similar to that in the foregoing embodiment, and will not be repeated here.
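As a rough illustration of how focus-area brightness and global brightness could drive shooting parameters (the application instead derives them from a trained parameter statistics model), one might meter the two brightness values with assumed weights and map the result to an ISO/exposure-compensation pair. The weights and the ISO/EV table below are assumptions for the sketch only.

```python
def exposure_params(focus_brightness, global_brightness):
    """Pick illustrative shooting parameters from the brightness of the
    focus area and of the whole frame (the two inputs named in the
    application), on a 0-255 scale.

    The 0.7/0.3 metering weights and the ISO/EV values are assumptions;
    the application determines parameters via a parameter statistics
    model trained on annotated sample images.
    """
    metered = 0.7 * focus_brightness + 0.3 * global_brightness  # assumed weighting
    if metered < 64:
        return {"iso": 800, "ev": +1.0}   # dark scene: raise sensitivity
    if metered > 192:
        return {"iso": 100, "ev": -1.0}   # bright scene: pull exposure down
    return {"iso": 200, "ev": 0.0}        # mid-tones: neutral settings

params = exposure_params(focus_brightness=40, global_brightness=90)
```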
- an embodiment of the present application also provides a shooting device.
- the shooting device performs shooting using the shooting method of the foregoing embodiments, and its problem-solving principle is similar to that of the method.
- the shooting device includes a memory for storing computer program instructions and a processor for executing computer program instructions, wherein when the computer program instructions are executed by the processor, the device is triggered to perform the aforementioned shooting method.
- FIG. 4 shows a structure of a shooting device suitable for implementing the method and/or technical solution in the embodiments of the present application.
- the camera device 400 includes a central processing unit (CPU, Central Processing Unit) 401, which can perform various appropriate actions and processes according to a program stored in the read-only memory (ROM, Read Only Memory) 402 or a program loaded from the storage section 408 into the random access memory (RAM, Random Access Memory) 403. The RAM 403 also stores various programs and data required for system operation.
- the CPU 401, ROM 402, and RAM 403 are connected to each other through a bus 404.
- An input/output (I/O, Input/Output) interface 405 is also connected to the bus 404.
- the following components are connected to the I/O interface 405: the input section 406 including the camera module, etc.; the output section 407 including a cathode ray tube (CRT, Cathode Ray Tube), liquid crystal display (LCD, Liquid Crystal Display), LED display, OLED display, etc., and a speaker, etc.; the storage section 408 including one or more computer-readable media such as a hard disk, optical disk, magnetic disk, semiconductor memory, etc.; and the communication section 409 including a network interface card such as a LAN (Local Area Network) card, a modem, etc.
- the communication section 409 performs communication processing via a network such as the Internet.
- embodiments of the present application may be implemented as computer software programs.
- embodiments of the present disclosure include a computer program product that includes a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart.
- when the computer program is executed by the central processing unit (CPU) 401, the above-mentioned functions defined in the method of the present application are executed.
- the computer-readable medium described in this application may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two.
- the computer readable medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
- the computer-readable medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
- the computer-readable signal medium may include a data signal that is propagated in a baseband or as part of a carrier wave, in which a computer-readable program code is carried.
- This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
- the computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; such a computer-readable medium may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device.
- the program code contained on the computer-readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, optical cable, RF, etc., or any suitable combination of the foregoing.
- the computer program code for performing the operations of the present application may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
- the program code may be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
- the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through an Internet connection provided by an Internet service provider).
- each block in the flowcharts or block diagrams may represent a module, program segment, or part of code that contains one or more executable instructions for implementing the specified logic functions.
- the functions noted in the block may occur out of the order noted in the figures. For example, two blocks represented in succession may actually be executed in parallel, and they may sometimes be executed in reverse order, depending on the functions involved.
- each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
- the present application also provides a computer-readable medium.
- the computer-readable medium may be included in the device described in the foregoing embodiments; or it may exist alone without being assembled into the device.
- the above computer-readable medium carries one or more computer-readable instructions, which can be executed by a processor to implement the methods and/or technical solutions of the foregoing multiple embodiments of the present application.
- the first image is acquired through the camera module, composition prompt information applicable to the first image is determined and added to the display area of the first image, so that the user can adjust the camera module according to the composition prompt information; after the user completes the adjustment, a second image is obtained through the camera module, shooting parameters applicable to the second image are determined, and the second image is shot based on those shooting parameters.
- in this way, the user can be provided with composition prompt information appropriate to the specific content of the first image, prompting the user how to adjust, while appropriate shooting parameters are automatically determined for the user once the adjustment is complete, so that user needs can be satisfied in multiple shooting scenarios.
- the present application may be implemented in software and/or a combination of software and hardware, for example, it may be implemented using an application specific integrated circuit (ASIC), a general purpose computer, or any other similar hardware device.
- the software program of the present application may be executed by a processor to implement the above steps or functions.
- the software programs of the present application can be stored in computer-readable recording media, such as RAM memory, magnetic or optical drives or floppy disks, and similar devices.
- some steps or functions of the present application may be implemented by hardware, for example, as a circuit that cooperates with a processor to perform various steps or functions.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Studio Devices (AREA)
Abstract
Description
Claims (18)
- A photographing method, wherein the method comprises: acquiring a first image through a camera module; acquiring, according to the first image, a second image conforming to a composition pattern; and determining shooting parameters applicable to the second image and shooting the second image based on the shooting parameters.
- The method according to claim 1, wherein acquiring, according to the first image, a second image conforming to a composition pattern comprises: determining, according to the first image, a composition pattern applicable to the first image; and selecting, according to the composition pattern, a second image conforming to the composition pattern from the first image.
- The method according to claim 1, wherein acquiring, according to the first image, a second image conforming to a composition pattern comprises: determining composition prompt information applicable to the first image; adding the composition prompt information to a display area of the first image, so that a user adjusts the camera module according to the composition prompt information and the second image obtained through the adjustment conforms to the composition pattern; acquiring, after the user completes the adjustment, the second image through the camera module; and determining shooting parameters applicable to the second image and shooting the second image based on the shooting parameters.
- The method according to claim 3, wherein determining composition prompt information applicable to the first image comprises: determining scene information and a composition pattern according to the first image; and determining, according to the scene information and the composition pattern, the composition prompt information applicable to the first image.
- The method according to claim 3, wherein determining scene information and a composition pattern according to the first image comprises: determining the scene information and the composition pattern according to the first image and a recognition model, wherein the recognition model is obtained by training on sample images annotated with scene information and composition patterns; or sending the first image to a server device and receiving the scene information and composition pattern of the first image sent by the server device, wherein the server device determines the scene information and composition pattern of the first image according to the first image and a recognition model, the recognition model being obtained by training on sample images annotated with scene information and composition patterns.
- The method according to claim 5, wherein sending the first image to the server device comprises: compressing the first image and sending the compressed first image to the server device, so that the server device determines the scene information and composition pattern of the first image according to the compressed first image and the recognition model.
- The method according to claim 3, wherein the method further comprises: acquiring, through the camera module, real-time images during the user's adjustment; and determining and displaying to the user a score of the real-time image, so as to assist the user in completing the adjustment.
- The method according to claim 7, wherein determining the score of the real-time image comprises: calculating the score of the real-time image according to the real-time image and a score regression model, wherein the score regression model is obtained by training on sample images annotated with scores; or sending the real-time image to a server device and receiving the score of the real-time image sent by the server device, wherein the server device calculates the score of the real-time image according to the real-time image and a score regression model, the score regression model being obtained by training on sample images annotated with scores.
- The method according to claim 8, wherein sending the real-time image to the server device comprises: compressing the real-time image and sending the compressed real-time image to the server device, so that the server device calculates the score of the real-time image according to the compressed real-time image and the score regression model.
- The method according to claim 8 or 9, wherein the method further comprises: determining, according to positioning information obtained when the real-time image is acquired, the preset region to which the device belongs; and calculating the score of the real-time image according to the real-time image and the score regression model comprises: calculating the score of the real-time image according to the real-time image and the score regression model corresponding to the preset region, wherein the score regression model is obtained by training on score-annotated sample images relating to the preset region.
- The method according to claim 3, wherein acquiring the second image through the camera module after the user completes the adjustment comprises: judging whether the user has completed the adjustment; and if the adjustment has been completed, performing autofocus and acquiring the focused second image through the camera module.
- The method according to claim 11, wherein judging whether the user has completed the adjustment comprises: acquiring gyroscope information and judging, according to the gyroscope information, whether the user has completed the adjustment.
- The method according to any one of claims 1 to 3, wherein determining shooting parameters applicable to the second image comprises: identifying a focus area of the second image; determining the brightness of the focus area and the brightness of the global area of the second image; and determining, according to the brightness of the focus area and the brightness of the global area of the second image, the shooting parameters applicable to the second image.
- The method according to claim 13, wherein determining, according to the brightness of the focus area and the brightness of the global area of the second image, the shooting parameters applicable to the second image comprises: determining the shooting parameters applicable to the second image according to the brightness of the focus area of the second image, the brightness of the global area, and a parameter statistics model, wherein the parameter statistics model is obtained by training on sample images annotated with the brightness of the focus area, the brightness of the global area, and shooting parameters; or sending the brightness of the focus area and the brightness of the global area of the second image to a server device and receiving the shooting parameters applicable to the second image sent by the server device, wherein the server device determines the shooting parameters applicable to the second image according to the brightness of the focus area of the second image, the brightness of the global area, and a parameter statistics model, the parameter statistics model being obtained by training on sample images annotated with the brightness of the focus area, the brightness of the global area, and shooting parameters.
- The method according to any one of claims 1 to 3, wherein, when the first image is currently acquired by the camera module, the method further comprises: acquiring positioning information; and judging, according to the positioning information, whether the device is in a preset region, and showing the user, when the device is in the preset region, recommended images belonging to the preset region.
- The method according to claim 15, wherein judging, according to the positioning information, whether the device is in a preset region and showing the user, when the device is in the preset region, recommended images belonging to the preset region comprises: sending location information to a server device, acquiring the recommended images belonging to the preset region from the server device, and showing the recommended images to the user.
- A photographing device, wherein the device comprises a memory for storing computer program instructions and a processor for executing computer program instructions, wherein, when the computer program instructions are executed by the processor, the device is triggered to perform the method according to any one of claims 1 to 16.
- A computer-readable medium having computer program instructions stored thereon, the computer-readable instructions being executable by a processor to implement the method according to any one of claims 1 to 16.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/344,961 US20210306559A1 (en) | 2018-12-11 | 2021-06-11 | Photographing methods and devices |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811513708.1 | 2018-12-11 | ||
CN201811513708.1A CN109495686B (zh) | 2018-12-11 | 2018-12-11 | 拍摄方法及设备 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/344,961 Continuation US20210306559A1 (en) | 2018-12-11 | 2021-06-11 | Photographing methods and devices |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020119588A1 true WO2020119588A1 (zh) | 2020-06-18 |
Family
ID=65709823
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/123500 WO2020119588A1 (zh) | 2018-12-11 | 2019-12-06 | 拍摄方法及设备 |
Country Status (3)
Country | Link |
---|---|
US (1) | US20210306559A1 (zh) |
CN (1) | CN109495686B (zh) |
WO (1) | WO2020119588A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113011328A (zh) * | 2021-03-19 | 2021-06-22 | 北京百度网讯科技有限公司 | 图像处理方法、装置、电子设备及存储介质 |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109495686B (zh) * | 2018-12-11 | 2020-09-18 | 上海掌门科技有限公司 | 拍摄方法及设备 |
CN111277760B (zh) * | 2020-02-28 | 2022-02-01 | Oppo广东移动通信有限公司 | 一种拍摄构图方法及终端、存储介质 |
CN111327824B (zh) * | 2020-03-02 | 2022-04-22 | Oppo广东移动通信有限公司 | 拍摄参数的选择方法、装置、存储介质及电子设备 |
CN112351201B (zh) * | 2020-10-26 | 2023-11-07 | 北京字跳网络技术有限公司 | 多媒体数据处理方法、系统、装置、电子设备和存储介质 |
CN113824874A (zh) * | 2021-08-05 | 2021-12-21 | 宇龙计算机通信科技(深圳)有限公司 | 辅助摄像方法、装置、电子设备及存储介质 |
CN113724131A (zh) * | 2021-09-02 | 2021-11-30 | 北京有竹居网络技术有限公司 | 信息处理方法、装置和电子设备 |
CN114580521B (zh) * | 2022-02-28 | 2023-04-07 | 中国科学院软件研究所 | 一种知识与数据共同驱动的人像构图指引方法及装置 |
CN117688195A (zh) * | 2022-08-30 | 2024-03-12 | 华为技术有限公司 | 一种图片推荐方法及电子设备 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5831670A (en) * | 1993-03-31 | 1998-11-03 | Nikon Corporation | Camera capable of issuing composition information |
CN106210513A (zh) * | 2016-06-30 | 2016-12-07 | 维沃移动通信有限公司 | 一种基于移动终端的拍照预览方法及移动终端 |
CN106357804A (zh) * | 2016-10-31 | 2017-01-25 | 北京小米移动软件有限公司 | 图像处理方法、电子设备及云端服务器 |
CN107317962A (zh) * | 2017-05-12 | 2017-11-03 | 广东网金控股股份有限公司 | 一种智能拍照裁剪构图系统及使用方法 |
CN107566529A (zh) * | 2017-10-18 | 2018-01-09 | 维沃移动通信有限公司 | 一种拍照方法、移动终端及云端服务器 |
CN108833784A (zh) * | 2018-06-26 | 2018-11-16 | Oppo(重庆)智能科技有限公司 | 一种自适应构图方法、移动终端及计算机可读存储介质 |
CN109495686A (zh) * | 2018-12-11 | 2019-03-19 | 上海掌门科技有限公司 | 拍摄方法及设备 |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2993890A4 (en) * | 2013-04-30 | 2016-09-28 | Sony Corp | CLIENT DEVICE, DISPLAY CONTROL PROCEDURE, PROGRAM AND SYSTEM |
CN104301613B (zh) * | 2014-10-16 | 2016-03-02 | 深圳市中兴移动通信有限公司 | 移动终端及其拍摄方法 |
US10002415B2 (en) * | 2016-04-12 | 2018-06-19 | Adobe Systems Incorporated | Utilizing deep learning for rating aesthetics of digital images |
- 2018-12-11 CN CN201811513708.1A patent/CN109495686B/zh active Active
- 2019-12-06 WO PCT/CN2019/123500 patent/WO2020119588A1/zh active Application Filing
- 2021-06-11 US US17/344,961 patent/US20210306559A1/en not_active Abandoned
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113011328A (zh) * | 2021-03-19 | 2021-06-22 | 北京百度网讯科技有限公司 | 图像处理方法、装置、电子设备及存储介质 |
CN113011328B (zh) * | 2021-03-19 | 2024-02-27 | 北京百度网讯科技有限公司 | 图像处理方法、装置、电子设备及存储介质 |
Also Published As
Publication number | Publication date |
---|---|
US20210306559A1 (en) | 2021-09-30 |
CN109495686A (zh) | 2019-03-19 |
CN109495686B (zh) | 2020-09-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020119588A1 (zh) | 拍摄方法及设备 | |
US7805066B2 (en) | System for guided photography based on image capturing device rendered user recommendations according to embodiments | |
WO2020057198A1 (zh) | 图像处理方法、装置、电子设备及存储介质 | |
WO2017101293A1 (zh) | 多媒体照片生成方法、装置、设备及手机 | |
WO2016123893A1 (zh) | 拍照的方法和装置以及终端 | |
CN109934931B (zh) | 采集图像、建立目标物体识别模型的方法及装置 | |
CN104917959A (zh) | 一种拍照方法及终端 | |
EP3105921A1 (en) | Photo composition and position guidance in an imaging device | |
WO2014154003A1 (zh) | 一种自拍图像的展现方法及装置 | |
CN101093348A (zh) | 便携式终端中全景摄影的装置和方法 | |
WO2016011860A1 (zh) | 移动终端的拍摄方法及移动终端 | |
WO2014169582A1 (zh) | 配置参数发送方法、接收方法及装置 | |
CN108419009A (zh) | 图像清晰度增强方法和装置 | |
JP2014146989A (ja) | 撮像装置、撮像方法および撮像プログラム | |
WO2016192467A1 (zh) | 一种播放视频的方法及装置 | |
WO2023174009A1 (zh) | 基于虚拟现实的拍摄处理方法、装置及电子设备 | |
KR20170011876A (ko) | 영상 처리 장치 및 그 동작 방법 | |
GB2553659A (en) | A System for creating an audio-visual recording of an event | |
WO2018028720A1 (zh) | 拍摄方法以及拍摄装置 | |
WO2019041158A1 (zh) | 拍照设备拍摄优化控制方法、装置及计算机处理设备 | |
CN111885296B (zh) | 可视化数据的动态处理方法和电子设备 | |
CN111654620B (zh) | 拍摄方法及装置 | |
CN106611440B (zh) | 一种提取实景图的方法及装置 | |
WO2021237592A1 (zh) | 锚点信息处理方法、装置、设备及存储介质 | |
WO2018137393A1 (zh) | 一种图像处理方法及电子设备 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19894618 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19894618 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 13.12.21) |
|