WO2019233392A1 - Image processing method and apparatus, electronic device, and computer-readable storage medium - Google Patents

Image processing method and apparatus, electronic device, and computer-readable storage medium

Info

Publication number
WO2019233392A1
WO2019233392A1 · PCT/CN2019/089905 · CN2019089905W
Authority
WO
WIPO (PCT)
Prior art keywords
image
label
detected
target
tag
Prior art date
Application number
PCT/CN2019/089905
Other languages
English (en)
Chinese (zh)
Inventor
刘耀勇
Original Assignee
Oppo广东移动通信有限公司
Priority date
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Publication of WO2019233392A1 publication Critical patent/WO2019233392A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques

Definitions

  • the present application relates to the field of computer technology, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
  • the smart electronic device may classify images by the time they were taken, by the location where they were taken, by the people appearing in them, and so on. Classification lets users browse images more conveniently and quickly.
  • Embodiments of the present application provide an image processing method, apparatus, electronic device, and computer-readable storage medium, which can improve the efficiency of image processing on an image.
  • An image processing method includes: acquiring an image to be detected; performing scene recognition on the image to be detected to obtain a target scene label of the image to be detected; performing target detection on the image to be detected to obtain a target subject label of the image to be detected; determining an image label of the image to be detected according to the target scene label and/or the target subject label; and performing image processing on the image to be detected according to the image label.
  • An image processing device includes:
  • a first acquisition module configured to acquire an image to be detected
  • a recognition module configured to perform scene recognition on the image to be detected, and obtain a target scene label of the image to be detected
  • a detection module configured to perform target detection on the image to be detected, and obtain a target subject label of the image to be detected
  • a determining module configured to determine an image label of the image to be detected according to the target scene label and/or the target subject label
  • a processing module configured to perform image processing on the image to be detected according to the image label.
  • An electronic device includes a memory and a processor. The memory stores a computer program that, when executed by the processor, causes the processor to perform the operations of the method described above.
  • a computer-readable storage medium has stored thereon a computer program that, when executed by a processor, implements the operations of the method described above.
  • the image processing method and apparatus, electronic device, and computer-readable storage medium of the embodiments of the present application determine an image tag for an image by performing scene recognition and target detection on it, and then perform image processing on the image to be detected according to that tag. Images sharing the same image tag can be batch processed, and images with different image tags can receive different image processing, which both improves the image processing efficiency of the electronic device and makes its automatic image processing effects more diversified.
  • FIG. 1 is a flowchart of an image processing method according to an embodiment.
  • FIG. 2 is a flowchart of an image processing method in another embodiment.
  • FIG. 3 is a flowchart of an image processing method in another embodiment.
  • FIG. 4 is a structural block diagram of an image processing apparatus in an embodiment.
  • FIG. 5 is a structural block diagram of an image processing apparatus in another embodiment.
  • FIG. 6 is a structural block diagram of an image processing apparatus in another embodiment.
  • FIG. 7 is a schematic diagram of an internal structure of an electronic device in an embodiment.
  • FIG. 8 is a schematic diagram of an image processing circuit in one embodiment.
  • FIG. 1 is a flowchart of an image processing method according to an embodiment. As shown in FIG. 1, an image processing method includes:
  • the electronic device may acquire an image to be detected.
  • the image to be detected may be an image captured by the electronic device, an image stored in the electronic device, or an image downloaded by the electronic device.
  • the electronic device may perform scene recognition, target detection, and the like on the image to be detected to determine the shooting scene corresponding to the image and the target subject captured in it. After the shooting scene and the target subject are obtained, classification processing may be performed on the image to be detected according to the shooting scene, and image processing may be performed on the target subject in the image to be detected.
  • Operation 104 Perform scene recognition on the image to be detected, and obtain a target scene label of the image to be detected.
  • the electronic device may perform scene recognition on the image to be detected, and obtain a target scene label of the image to be detected.
  • the electronic device may use image classification technology to perform scene recognition on the image to be detected.
  • Image classification refers to a method of dividing an image or an image area into one of several categories according to the characteristics reflected by the image information.
  • the electronic device performing scene recognition with image classification technology works as follows: the electronic device pre-stores image feature information corresponding to multiple scene tags; after acquiring the image to be detected, it matches the image feature information of that image against the stored image feature information, and the scene label corresponding to the successfully matched image feature information is taken as the target scene label of the image to be detected.
  • the scene tags pre-stored in the electronic device may include: landscape, beach, blue sky, green grass, snow scene, night scene, dark, backlight, sunset, fireworks, spotlight, indoor, macro, text, portrait, baby, cat, dog, food, etc.
  • the above scene tags may be divided into background tags and foreground tags. The background tags may include: landscape, beach, blue sky, green grass, snow scene, night scene, dark, backlight, sunset, fireworks, spotlight, indoor; the foreground tags may include: macro, text, portrait, baby, cat, dog, food, etc.
  • when the electronic device matches the image feature information of the image to be detected against the stored image feature information, if only one scene tag matches successfully, the electronic device uses that scene tag as the target scene tag; if multiple scene tags match, the electronic device can obtain the confidence of each scene tag and select one as the target scene tag according to those confidences.
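The selection rule above can be sketched in a few lines of Python. This is a minimal illustration only: the function name and the representation of matches as a label-to-confidence mapping are assumptions, and the actual feature matching is abstracted away.

```python
def pick_target_scene_label(matches):
    """Select the target scene label from matched scene tags.

    `matches` maps each successfully matched scene label to its
    confidence (0.0-1.0). With a single match, that label is used
    directly; with several, the highest-confidence label wins.
    """
    if not matches:
        return None  # no scene tag matched
    # One match or many: max() covers both cases.
    return max(matches, key=matches.get)
```

For example, with matches `{"blue sky": 0.90, "beach": 0.85}`, the function returns `"blue sky"`.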
  • Operation 106 Perform target detection on the image to be detected, and obtain a target subject label of the image to be detected.
  • the electronic device may also perform target detection on the image to be detected, and identify and locate a target subject in the image to be detected.
  • the above target detection refers to a method of identifying the type of an object in an image and calibrating its position in the image according to the characteristics reflected by the image information.
  • during target detection, the image feature information of the image to be detected can be matched against the feature information corresponding to the stored subject tags, and the successfully matched subject tag is taken as the target subject tag.
  • the subject tags stored in the electronic device may include: portrait, baby, cat, dog, food, text, blue sky, green grass, beach, fireworks, and the like.
  • when the electronic device performs target detection on the image to be detected, if only one subject tag is found, that subject tag is used as the target subject tag; if multiple subject tags are found, the electronic device may select one of them as the target subject tag. For example, the electronic device may select the subject tag whose corresponding subject region has the largest area, or the one whose corresponding subject region has the highest definition, as the target subject tag.
  • the electronic device may also obtain position information of the target subject area corresponding to the target subject tag, and mark the target subject area in the image to be detected. For example, the electronic device may mark a target subject region in the image to be detected with a rectangular frame.
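The largest-area selection rule and the rectangular subject region can be sketched as follows. The list-of-tuples detection format and the function name are illustrative assumptions; definition-based selection is omitted because it would need pixel data.

```python
def pick_target_subject(detections):
    """Pick the target subject from detected subjects.

    `detections` is a list of (subject_label, (x, y, w, h)) entries,
    where the box marks the subject region with a rectangular frame.
    With one detection its label is used directly; with several, the
    label whose box covers the largest area wins.
    """
    if not detections:
        return None, None
    label, box = max(detections, key=lambda d: d[1][2] * d[1][3])
    return label, box
```

The returned box is the position information of the target subject region that can then be marked in the image.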
  • Operation 108 Determine an image label of the image to be detected according to the target scene label and / or the target subject label.
  • the electronic device may select one of the target scene tag and the target subject tag as the image tag, or use both as image tags. In some embodiments the electronic device uses the target subject label as the image label; in others the target scene label is used as the image label.
  • Operation 110 Perform image processing on the image to be detected according to the image label.
  • the electronic device may perform image processing on the image to be detected according to the image tag.
  • the electronic device may perform group processing, global image processing, and local image processing on the image to be detected according to the image tag.
  • grouping processing refers to grouping images to be detected according to image tags, for example, grouping images corresponding to the same image tag into a group.
  • the aforementioned global image processing refers to performing color processing, saturation processing, brightness processing, contrast processing, and other processing on the entire image.
  • the local image processing refers to performing color processing, saturation processing, brightness processing, contrast processing, and other processing on part of an image.
  • the electronic device may obtain an image processing strategy corresponding to an image tag when performing image global processing or image local processing on an image to be detected, and perform image processing on the image to be detected according to the image processing strategy.
  • when the image label is the same as the target scene label, the electronic device can perform global image processing on the image to be detected; when the image label is the same as the target subject label, the electronic device can locate the target subject region corresponding to the target subject label and perform local image processing on that region of the image to be detected.
  • for example, when the image tag is "landscape", the electronic device may increase the saturation of the image to be detected; when the image tag is "portrait", the electronic device may perform a beauty treatment on the portrait area in the image to be detected.
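The global-versus-local dispatch just described can be sketched as a small decision function. The returned mode/description strings are illustrative placeholders; real pixel operations are out of scope.

```python
def process_image(image_label, scene_label, subject_label, subject_box=None):
    """Decide between global and local processing from the image label.

    If the image label matches the scene label, the whole image is
    processed; if it matches the subject label, only the subject
    region is processed.
    """
    if image_label == scene_label:
        # Scene-level label: process the entire image.
        return ("global", "apply scene preset for " + image_label)
    if image_label == subject_label:
        # Subject-level label: process only the subject region.
        return ("local", "apply subject preset to region " + str(subject_box))
    return ("none", "no matching strategy")
```

For a "landscape" scene label this yields a global step (e.g. saturation increase); for a "portrait" subject label, a local step on the portrait region.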
  • conventionally, when an electronic device performs image processing, it can often only process a single image, or part of a single image, according to fixed parameters; the resulting effects are relatively uniform and the efficiency of image processing is low.
  • in the embodiments of the present application, the image tag of an image can be determined by performing scene recognition and target detection on it, and image processing is then performed on the image to be detected according to that tag. This allows batch processing of images sharing the same image tag as well as different image processing for images with different image tags, which both improves the image processing efficiency of the electronic device and makes its automatic image processing effects more diversified.
  • the electronic device may use the classification model to perform scene recognition on the image to be detected, and use the detection model to perform target detection on the image to be detected.
  • the above classification model and detection model are both deep learning models.
  • an independent classification model and detection model may be set in the electronic device, and the above classification model and detection model are run in parallel.
  • the electronic device inputs the image to be detected into the classification model and the detection model respectively; the classification model performs scene recognition on the image and outputs its target scene label, while the detection model performs target detection on the image and outputs its target subject label.
  • the electronic device may also multiplex the classification model and the detection model over a shared base network: features are extracted from the image to be detected once by the shared base network and then sent to the classification model and the detection model respectively.
  • the classification model can perform scene recognition based on the extracted features; the above detection model can perform target detection based on the extracted features.
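A toy sketch of the multiplexed base network described above: features are extracted once and consumed by both heads, rather than running two independent models on the same image. The classes and the trivial "features" are purely illustrative stand-ins for real deep learning models.

```python
class SharedBackbone:
    """Shared base network: extracts features once for both heads.

    The 'feature' here (raw pixels plus their mean) is a toy stand-in
    for a real CNN's feature maps.
    """
    def extract(self, image):
        mean = sum(image) / len(image) if image else 0
        return {"pixels": image, "mean": mean}

class ClassificationHead:
    def predict(self, features):
        # Toy rule standing in for scene recognition.
        return "bright scene" if features["mean"] > 128 else "dark scene"

class DetectionHead:
    def predict(self, features):
        # Toy rule standing in for target detection.
        return [("subject", (0, 0, 1, 1))] if features["pixels"] else []

def run_pipeline(image):
    features = SharedBackbone().extract(image)  # extracted once, shared
    return (ClassificationHead().predict(features),
            DetectionHead().predict(features))
```

The design point is that feature extraction, usually the most expensive step, is not duplicated between scene recognition and target detection.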
  • performing scene recognition on the image to be detected and obtaining its target scene label includes: performing scene recognition on the image to be detected to obtain multiple scene labels corresponding to the image and the confidence of each scene label, and determining the target scene label according to those confidences.
  • the electronic device may output multiple scene labels when performing scene recognition on the image to be detected; these scene labels represent the candidate shooting scenes the electronic device has detected for the image.
  • the electronic device may output the confidence level of each scene label.
  • the above confidence is a value indicating the credibility of an output parameter; the higher the confidence, the more credible the corresponding scene label.
  • for example, the electronic device may output multiple scene labels for the image to be detected: "blue sky" with 90% confidence, "beach" with 85% confidence, and "grassland" with 80% confidence. Among these three scene labels, "blue sky" has the highest credibility, meaning the shooting scene of the image to be detected is closest to "blue sky".
  • the electronic device may determine the target scene label according to the above confidence level.
  • the electronic device may use the scene label with the highest confidence as the target scene label.
  • the target scene label has high accuracy, which is beneficial to further processing the image according to the obtained target scene label.
  • determining the image label of the image to be detected according to the target scene label and / or the target subject label includes:
  • the image to be detected may contain no target scene tag, or no target subject tag.
  • if the image to be detected includes only a target scene label and no target subject label, the electronic device may use the target scene label as the image label; the image label of the image to be detected is then a single label. If the image includes only a target subject label and no target scene label, the electronic device may use the target subject label as the image label, again a single label. If the image includes no target scene label and target detection yields multiple subject labels, the electronic device may obtain the subject region corresponding to each subject label and determine the image label from the areas of those subject regions.
  • specifically, the electronic device can sort the subject tags by the area of their subject regions, then select as image labels the tags ranked within the number of image labels to be output. For example, if the electronic device outputs two image tags, it may select the subject tags with the largest and second-largest subject region areas as the image labels.
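The area-ranked fallback labelling can be sketched as below. The mapping from subject label to region size and the function name are assumptions for illustration.

```python
def image_labels_from_subjects(subject_regions, n_labels=1):
    """Fallback labelling when no scene label is available.

    `subject_regions` maps each subject label to the (w, h) of its
    region. Labels are sorted by region area, largest first, and the
    top `n_labels` are kept as image labels.
    """
    ranked = sorted(subject_regions,
                    key=lambda lbl: subject_regions[lbl][0] * subject_regions[lbl][1],
                    reverse=True)
    return ranked[:n_labels]
```

With regions `{"cat": (10, 10), "dog": (20, 20), "food": (5, 5)}` and two output labels, this yields `["dog", "cat"]`.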
  • the electronic device may select a target subject label or a target scene label as the image label of the image to be detected, which is helpful for quickly and accurately determining the image label.
  • determining the image tag of the image to be detected according to the target scene tag and/or the target subject tag includes: if the obtained target scene tag differs from the target subject tag, obtaining the area ratio of the target subject region corresponding to the target subject tag to the image to be detected, and determining the image label based on that area ratio.
  • that is, when the target scene label differs from the target subject label, one of the two is selected as the image tag. When the target scene label is the same as the target subject label, the target subject label is directly used as the image label; for example, when both the target subject label and the target scene label are "blue sky", the electronic device outputs "blue sky" as the image label.
  • the electronic device may obtain a target subject area corresponding to the target subject label, and determine an image tag of the image to be detected according to an area ratio of the area of the target subject area to an area of the image to be detected.
  • determining the image label of the image to be detected according to the area ratio includes: if the area ratio is lower than a first threshold, using the target scene label as the image label; if the area ratio is not lower than the first threshold, obtaining the first confidence of the target scene label and the second confidence of the target subject label, and determining the image label according to the first confidence and the second confidence.
  • when the area ratio is lower than the first threshold, the subject region in the image to be detected is small, and target detection on a small subject region has low accuracy; the electronic device therefore selects the target scene label obtained by image classification as the image tag.
  • when the area ratio is not lower than the first threshold, the electronic device may obtain the first confidence of the target scene label and the second confidence of the target subject label, and determine the image tag according to them; specifically, the electronic device may select the label with the higher of the two confidences as the image tag.
  • for example, if the target scene label output for the image to be detected is "beach" with 90% confidence, the target subject label is "cat" with 95% confidence, and the electronic device detects that the area ratio of the target subject region to the image to be detected is greater than 1/3, the electronic device selects the higher-confidence target subject tag "cat" as the image tag.
  • the first threshold may be a value set by a user or a value set by an electronic device, for example, 1/2 or 1/3.
  • in this way, when the target subject label obtained by the electronic device differs from the target scene label, the image label can be determined according to the area of the target subject region, making the image label output for the image to be detected more accurate.
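Putting the area-ratio and confidence rules together gives the following sketch. The 1/3 default mirrors one of the example threshold values in the text; parameter names are illustrative assumptions.

```python
def choose_image_label(scene_label, scene_conf, subject_label, subject_conf,
                       subject_area, image_area, first_threshold=1 / 3):
    """Choose the image label when scene and subject labels may differ."""
    if scene_label == subject_label:
        return subject_label  # identical labels: use the subject label directly
    if subject_area / image_area < first_threshold:
        # Small subject region: detection is less reliable, prefer the scene label.
        return scene_label
    # Large enough region: pick the higher-confidence label.
    return subject_label if subject_conf >= scene_conf else scene_label
```

With "beach" at 90%, "cat" at 95%, and an area ratio above 1/3, this returns "cat", matching the worked example above.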
  • the method further includes:
  • Operation 112 Obtain a target subject region corresponding to the target subject tag.
  • Operation 114 Acquire an image processing strategy corresponding to the target subject label.
  • Operation 116 Perform corresponding image processing on the target subject area according to the image processing strategy.
  • the electronic device may perform partial image processing on the image to be detected.
  • the electronic device may identify a target subject region corresponding to the target subject label in the image to be detected.
  • the electronic device may also obtain an image processing strategy corresponding to the target subject tag and perform image processing on the target subject region in the image to be detected according to that strategy. For example, when the target subject region is a portrait, the electronic device may perform beauty treatment on it; when the target subject region is food, the electronic device may enhance its saturation; when the target subject region is text, the electronic device may sharpen and calibrate it.
  • the image processing strategy corresponding to the above body tag may be pre-stored in the electronic device or stored in the server.
  • the electronic device can directly obtain a pre-stored image processing strategy corresponding to the target subject tag or search the server for an image processing strategy corresponding to the target subject tag.
  • in this way, the electronic device can perform image processing on the target subject region of the image to be detected, which not only realizes local processing of the image but also processes the target subject region according to an image processing strategy, making the image processing method more intelligent.
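The per-tag strategy lookup can be sketched as a table. The table contents only echo the example pairings given in the text (beauty for portraits, saturation for food, sharpening for text); the fallback and names are assumptions.

```python
# Hypothetical pre-stored strategy table, keyed by subject tag.
STRATEGIES = {
    "portrait": "beauty treatment",
    "food": "saturation enhancement",
    "text": "sharpen and calibrate",
}

def process_subject_region(subject_label):
    """Look up the image processing strategy for a subject tag.

    Falls back to a no-op when no strategy is stored locally; a real
    device might instead query a server, as the text notes.
    """
    return STRATEGIES.get(subject_label, "no processing")
```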
  • the method further includes:
  • Operation 118 Obtain the number of views of each image under the same image tag.
  • Operation 120 Determine a display order of each image according to the number of views.
  • the electronic device can group the images according to the image tags, and group the images corresponding to the same image tag into a group.
  • the electronic device can display the images corresponding to each group on the electronic device interface, so that users can directly view the images according to the image tags.
  • the electronic device may also count the browsing times of each image under the same image tag. The above browsing times may be the cumulative browsing times or the browsing times counted over a period of time.
  • the electronic device may determine the display order of the foregoing images according to the number of views of each image.
  • the electronic device may sequentially display the images in order of the number of browsing times from high to low; the electronic device may also sequentially display the images in order of the number of browsing times from low to high.
  • in this way, the electronic device can determine the display order of the images, making the way images are displayed more intelligent.
  • an image processing method includes:
  • performing scene recognition on the image to be detected and obtaining its target scene label includes: performing scene recognition on the image to be detected to obtain multiple scene labels corresponding to the image and the confidence of each scene label, and determining the target scene label according to those confidences.
  • determining the image tag of the image to be detected according to the target scene tag and/or the target subject tag includes: if the target scene tag cannot be obtained, using the target subject tag as the image tag; if the target subject tag cannot be obtained, using the target scene tag as the image tag.
  • determining the image tag of the image to be detected according to the target scene tag and/or the target subject tag includes: if the obtained target scene tag differs from the target subject tag, obtaining the area ratio of the target subject region corresponding to the target subject tag to the image to be detected, and determining the image label based on that area ratio.
  • determining the image label of the image to be detected according to the area ratio includes: if the area ratio is lower than the first threshold, using the target scene label as the image label; if the area ratio is not lower than the first threshold, obtaining the first confidence of the target scene label and the second confidence of the target subject label, and determining the image label according to the first confidence and the second confidence.
  • the method further includes: acquiring a target subject region corresponding to the target subject tag; acquiring an image processing strategy corresponding to the target subject tag; and performing corresponding image processing on the target subject region according to the image processing strategy.
  • the above method further includes: obtaining the number of views of each image under the same image tag; and determining the display order of each image according to the number of views.
  • FIG. 4 is a structural block diagram of an image processing apparatus in an embodiment. As shown in FIG. 4, an image processing apparatus includes:
  • the first acquiring module 402 is configured to acquire an image to be detected.
  • the recognition module 404 is configured to perform scene recognition on an image to be detected, and obtain a target scene label of the image to be detected.
  • the detection module 406 is configured to perform target detection on the image to be detected, and obtain a target subject label of the image to be detected.
  • a determining module 408 is configured to determine an image label of an image to be detected according to a target scene label and / or a target subject label.
  • the processing module 410 is configured to perform image processing on an image to be detected according to an image tag.
  • the recognition module 404 performing scene recognition on the image to be detected and obtaining its target scene tag includes: performing scene recognition on the image to be detected to obtain multiple scene tags corresponding to the image and the confidence of each scene tag, and determining the target scene tag according to those confidences.
  • the determining module 408 determining the image tag of the image to be detected according to the target scene tag and/or the target subject tag includes: if the target scene tag cannot be obtained, using the target subject tag as the image tag; if the target subject tag cannot be obtained, using the target scene tag as the image tag.
  • the determining module 408 determining the image tag of the image to be detected according to the target scene tag and/or the target subject tag includes: if the obtained target scene tag differs from the target subject tag, obtaining the area ratio of the target subject region corresponding to the target subject tag to the image to be detected, and determining the image label according to that area ratio.
  • the determining module 408 determining the image label of the image to be detected according to the area ratio includes: if the area ratio is lower than the first threshold, using the target scene label as the image label; if the area ratio is not lower than the first threshold, obtaining the first confidence of the target scene label and the second confidence of the target subject label, and determining the image label according to the first confidence and the second confidence.
  • FIG. 5 is a structural block diagram of an image processing apparatus in another embodiment.
  • an image processing apparatus includes: a first acquisition module 502, an identification module 504, a detection module 506, a determination module 508, a processing module 510, and a second acquisition module 512.
  • the first acquisition module 502, the identification module 504, the detection module 506, the determination module 508, and the processing module 510 have the same functions as the corresponding modules in FIG. 4.
  • the second acquisition module 512 is configured to acquire a target subject region corresponding to the target subject tag; and acquire an image processing strategy corresponding to the target subject tag.
  • the processing module 510 is further configured to perform corresponding image processing on the target subject area according to the image processing strategy.
  • FIG. 6 is a structural block diagram of an image processing apparatus in another embodiment.
  • an image processing device includes: a first acquisition module 602, an identification module 604, a detection module 606, a determination module 608, a processing module 610, a statistics module 612, and a display module 614.
  • the first acquisition module 602, the identification module 604, the detection module 606, the determination module 608, and the processing module 610 have the same functions as the corresponding modules in FIG. 4.
  • the statistics module 612 is configured to obtain the browsing times of each image under the same image label.
  • a display module 614 is configured to determine a display order of each image according to the number of views.
  • the division of modules in the above image processing apparatus is for illustration only. In other embodiments, the image processing apparatus may be divided into different modules as needed to complete all or part of its functions.
  • Each module in the image processing apparatus may be implemented in whole or in part by software, hardware, and a combination thereof.
  • each of the above modules may be embedded in, or independent of, the processor in the computer device in hardware form, or stored in the memory of the computer device in software form, so that the processor can call and execute the operations corresponding to the modules.
  • each module in the image processing apparatus provided in the embodiments of the present application may be in the form of a computer program.
  • the computer program can be run on a terminal or server.
  • the program module constituted by the computer program can be stored in the memory of the terminal or server.
  • the computer program is executed by a processor, the operations of the image processing method described in the embodiments of the present application are implemented.
  • FIG. 7 is a schematic diagram of an internal structure of an electronic device in an embodiment.
  • the electronic device includes a processor, a memory, and a network interface connected through a system bus.
  • the processor is used to provide computing and control capabilities to support the operation of the entire electronic device.
  • the memory is used to store data, programs, and the like. At least one computer program is stored on the memory, and the computer program can be executed by a processor to implement the image processing method applicable to the electronic device provided in the embodiments of the present application.
  • the memory may include a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system and a computer program.
  • the computer program can be executed by a processor to implement an image processing method provided by each of the following embodiments.
  • the internal memory provides a cached running environment for the operating system and the computer programs in the non-volatile storage medium.
  • the network interface may be an Ethernet card or a wireless network card, and is used to communicate with external electronic devices.
  • the electronic device may be a mobile phone, a tablet computer, a personal digital assistant, or a wearable device.
  • An embodiment of the present application further provides a computer-readable storage medium.
  • One or more non-volatile computer-readable storage media containing computer-executable instructions are provided; when the computer-executable instructions are executed by one or more processors, the processors are caused to perform the operations of the image processing method described in the embodiments of the present application.
  • the embodiment of the present application further provides a computer program product containing instructions, which when executed on a computer, causes the computer to perform operations of the image processing method described in the embodiment of the present application.
  • An embodiment of the present application further provides an electronic device.
  • the above electronic device includes an image processing circuit.
  • the image processing circuit may be implemented by hardware and/or software components, and may include various processing units that define an ISP (Image Signal Processing) pipeline.
  • FIG. 8 is a schematic diagram of an image processing circuit in one embodiment. As shown in FIG. 8, for ease of description, only aspects of the image processing technology related to the embodiments of the present application are shown.
  • the image processing circuit includes a first ISP processor 830, a second ISP processor 840, and a control logic 850.
  • the first camera 810 includes one or more first lenses 812 and a first image sensor 814.
  • the first image sensor 814 may include a color filter array (such as a Bayer filter).
  • the first image sensor 814 may obtain the light intensity and wavelength information captured by each imaging pixel of the first image sensor 814, and provide a set of raw image data that can be processed by the first ISP processor 830.
  • the second camera 820 includes one or more second lenses 822 and a second image sensor 824.
  • the second image sensor 824 may include a color filter array (such as a Bayer filter).
  • the second image sensor 824 may obtain the light intensity and wavelength information captured by each imaging pixel of the second image sensor 824, and provide a set of raw image data that can be processed by the second ISP processor 840.
  • the first image collected by the first camera 810 is transmitted to the first ISP processor 830 for processing.
  • After processing, the first ISP processor 830 may send statistical data of the first image (such as image brightness, image contrast value, image color, etc.) to the control logic 850.
  • the control logic 850 can determine the control parameters of the first camera 810 according to the statistical data, so that the first camera 810 can perform operations such as autofocus and automatic exposure according to the control parameters.
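The feedback loop between the ISP statistics and the control logic can be illustrated with a simplified automatic-exposure sketch. Assumptions made here and not in the source: the statistics reduce to a single normalized mean brightness, and exposure time is adjusted proportionally toward a target; the function and parameter names are illustrative only:

```python
def auto_exposure_step(mean_brightness, exposure_time,
                       target=0.5, gain=0.8,
                       min_exp=1e-4, max_exp=1e-1):
    """One iteration of a proportional auto-exposure controller.

    mean_brightness: normalized [0, 1] brightness from the ISP statistics.
    exposure_time:   current exposure time in seconds.
    Returns the updated exposure time, clamped to the sensor's range.
    """
    error = target - mean_brightness
    new_exp = exposure_time * (1.0 + gain * error)
    return max(min_exp, min(max_exp, new_exp))

# A dark frame (brightness 0.2) drives the exposure time up:
exp = auto_exposure_step(0.2, 0.01)
print(round(exp, 5))  # 0.0124
```

In a real pipeline this loop runs per frame: the ISP processor computes statistics, the control logic derives new control parameters, and the camera applies them before the next capture.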
  • the first image may be stored in the image memory 860 after being processed by the first ISP processor 830, and the first ISP processor 830 may also read the image stored in the image memory 860 for processing.
  • the first image may be directly sent to the display 870 for display after being processed by the first ISP processor 830, and the display 870 may also read the image in the image memory 860 for display.
  • the first ISP processor 830 processes the image data pixel by pixel in a variety of formats.
  • each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the first ISP processor 830 may perform one or more image processing operations on the image data and collect statistical information about the image data.
  • the image processing operations may be performed with the same or different bit depth accuracy.
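The bit-depth handling mentioned above can be illustrated with a simple precision-reduction step — a hypothetical example (not from the specification) of mapping 10-bit raw pixel samples to 8-bit values:

```python
def raw10_to_8bit(pixels):
    """Map 10-bit raw samples (0..1023) to 8-bit (0..255) by dropping the two LSBs."""
    return [p >> 2 for p in pixels]

print(raw10_to_8bit([0, 512, 1023]))  # [0, 128, 255]
```
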
  • the image memory 860 may be a part of a memory device, a storage device, or a separate dedicated memory in an electronic device, and may include a DMA (Direct Memory Access) feature.
  • the first ISP processor 830 may perform one or more image processing operations, such as time-domain filtering.
  • the processed image data may be sent to the image memory 860 for further processing before being displayed.
  • the first ISP processor 830 receives processed data from the image memory 860 and performs image data processing on it in the RGB and YCbCr color spaces.
  • the image data processed by the first ISP processor 830 may be output to the display 870 for viewing by a user and / or further processed by a graphics engine or a GPU (Graphics Processing Unit).
  • the output of the first ISP processor 830 may also be sent to the image memory 860, and the display 870 may read image data from the image memory 860.
  • the image memory 860 may be configured to implement one or more frame buffers.
  • the statistical data determined by the first ISP processor 830 may be sent to the control logic 850.
  • the statistical data may include statistical information of the first image sensor 814 such as automatic exposure, automatic white balance, automatic focus, flicker detection, black level compensation, and first lens 812 shading correction.
  • the control logic 850 may include a processor and/or a microcontroller that executes one or more routines (such as firmware), and the one or more routines may determine the control parameters of the first camera 810 and the control parameters of the first ISP processor 830 according to the received statistical data.
  • control parameters of the first camera 810 may include gain, integration time of exposure control, image stabilization parameters, flash control parameters, control parameters of the first lens 812 (for example, focal length for focusing or zooming), or a combination of these parameters.
  • the ISP control parameters may include a gain level and a color correction matrix for automatic white balance and color adjustment (eg, during RGB processing), and a first lens 812 shading correction parameter.
  • the second image collected by the second camera 820 is transmitted to the second ISP processor 840 for processing.
  • After processing, the second ISP processor 840 may send statistical data of the second image (such as image brightness, image contrast value, image color, etc.) to the control logic 850.
  • the control logic 850 can determine the control parameters of the second camera 820 according to the statistical data, so that the second camera 820 can perform operations such as autofocus and automatic exposure according to the control parameters.
  • the second image may be stored in the image memory 860 after being processed by the second ISP processor 840, and the second ISP processor 840 may also read the image stored in the image memory 860 for processing.
  • the second image may be directly sent to the display 870 for display after being processed by the second ISP processor 840, and the display 870 may also read the image in the image memory 860 for display.
  • the second camera 820 and the second ISP processor 840 may also implement the processing operations described above for the first camera 810 and the first ISP processor 830.
  • the electronic device can implement the image processing method described in the embodiment of the present application according to the image processing technology.
  • Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM), which is used as external cache memory.
  • RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).


Abstract

An image processing method comprises: acquiring an image to be detected; performing scene recognition on the image to be detected to obtain a target scene label of the image to be detected; performing target detection on the image to be detected to obtain a target body label of the image to be detected; determining an image label of the image to be detected according to the target scene label and/or the target body label; and performing image processing on the image to be detected according to the image label.
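The claimed flow — acquiring an image to be detected, scene recognition, target detection, and label fusion — can be sketched as follows. The recognizer and detector here are stand-ins (the real method presumably uses trained models), and every name and the fusion rule are hypothetical:

```python
def recognize_scene(image):
    # Stand-in for a scene-classification model returning a target scene label.
    return "beach"

def detect_targets(image):
    # Stand-in for a target-detection model returning target body labels.
    return ["person", "dog"]

def label_image(image):
    """Determine the image label from the scene label and/or target body labels."""
    scene_label = recognize_scene(image)
    target_labels = detect_targets(image)
    # Illustrative fusion rule: prefer target body labels when any are present,
    # otherwise fall back to the scene label.
    return target_labels if target_labels else [scene_label]

print(label_image(object()))  # ['person', 'dog']
```
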
PCT/CN2019/089905 2018-06-08 2019-06-04 Image processing method and apparatus, electronic device and computer-readable storage medium WO2019233392A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810589446.0 2018-06-08
CN201810589446.0A CN108846351A (zh) Image processing method and apparatus, electronic device and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2019233392A1 true WO2019233392A1 (fr) 2019-12-12

Family

ID=64210855

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/089905 WO2019233392A1 (fr) Image processing method and apparatus, electronic device and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN108846351A (fr)
WO (1) WO2019233392A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114445711A (zh) * 2022-01-29 2022-05-06 北京百度网讯科技有限公司 Image detection method and apparatus, electronic device and storage medium

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108846351A (zh) * 2018-06-08 2018-11-20 Oppo广东移动通信有限公司 Image processing method and apparatus, electronic device and computer-readable storage medium
CN109685741B (zh) * 2018-12-28 2020-12-11 北京旷视科技有限公司 Image processing method, apparatus and computer storage medium
CN109685746B (zh) * 2019-01-04 2021-03-05 Oppo广东移动通信有限公司 Image brightness adjustment method and apparatus, storage medium and terminal
CN109784252A (zh) * 2019-01-04 2019-05-21 Oppo广东移动通信有限公司 Image processing method and apparatus, storage medium and electronic device
CN110163810B (zh) * 2019-04-08 2023-04-25 腾讯科技(深圳)有限公司 Image processing method, apparatus and terminal
CN110490225B (zh) * 2019-07-09 2022-06-28 北京迈格威科技有限公司 Scene-based image classification method, apparatus, system and storage medium
CN110348422B (zh) * 2019-07-18 2021-11-09 北京地平线机器人技术研发有限公司 Image processing method and apparatus, computer-readable storage medium and electronic device
CN110765525B (zh) * 2019-10-18 2023-11-10 Oppo广东移动通信有限公司 Method and apparatus for generating a scene picture, electronic device and medium
CN110996153B (zh) 2019-12-06 2021-09-24 深圳创维-Rgb电子有限公司 Scene-recognition-based audio and picture quality enhancement method, system and display
CN111027622B (zh) * 2019-12-09 2023-12-08 Oppo广东移动通信有限公司 Picture label generation method and apparatus, computer device and storage medium
CN111625674A (zh) * 2020-06-01 2020-09-04 联想(北京)有限公司 Picture processing method and apparatus
CN113610934B (zh) * 2021-08-10 2023-06-27 平安科技(深圳)有限公司 Image brightness adjustment method and apparatus, device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104169944A (zh) * 2012-02-09 2014-11-26 诺基亚公司 Automatic notification of images showing common content
CN107657051A (zh) * 2017-10-16 2018-02-02 广东欧珀移动通信有限公司 Picture label generation method, terminal device and storage medium
CN107835364A (zh) * 2017-10-30 2018-03-23 维沃移动通信有限公司 Photographing assistance method and mobile terminal
CN108846351A (zh) * 2018-06-08 2018-11-20 Oppo广东移动通信有限公司 Image processing method and apparatus, electronic device and computer-readable storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103617432B (zh) * 2013-11-12 2017-10-03 华为技术有限公司 Scene recognition method and apparatus
CN103810504B (zh) * 2014-01-14 2017-03-22 三星电子(中国)研发中心 Image processing method and apparatus
CN106469162A (zh) * 2015-08-18 2017-03-01 中兴通讯股份有限公司 Picture sorting method and corresponding picture storage and display device
CN107886104A (zh) * 2016-09-30 2018-04-06 法乐第(北京)网络科技有限公司 Image annotation method
CN107481327B (zh) * 2017-09-08 2019-03-15 腾讯科技(深圳)有限公司 Processing method, apparatus, terminal device and system for augmented reality scenes
CN107704884B (zh) * 2017-10-16 2022-01-07 Oppo广东移动通信有限公司 Image label processing method, image label processing apparatus and electronic terminal
CN107993191B (zh) * 2017-11-30 2023-03-21 腾讯科技(深圳)有限公司 Image processing method and apparatus


Also Published As

Publication number Publication date
CN108846351A (zh) 2018-11-20

Similar Documents

Publication Publication Date Title
WO2019233392A1 Image processing method and apparatus, electronic device and computer-readable storage medium
WO2020001197A1 Image processing method, electronic device and computer-readable storage medium
WO2019233394A1 Image processing method and apparatus, storage medium and electronic device
WO2019233393A1 Image processing method and apparatus, storage medium and electronic device
WO2019233263A1 Video processing method, electronic device, and computer-readable recording medium
CN108764370B Image processing method and apparatus, computer-readable storage medium and computer device
WO2020259179A1 Focusing method, electronic device and computer-readable storage medium
US11457138B2 Method and device for image processing, method for training object detection model
US11233933B2 Method and device for processing image, and mobile terminal
CN108921161B Model training method and apparatus, electronic device and computer-readable storage medium
WO2019233266A1 Image processing method, computer-readable storage medium and electronic device
WO2019237887A1 Image processing method, electronic device and computer-readable storage medium
CN108961302B Image processing method and apparatus, mobile terminal and computer-readable storage medium
CN108960232A Model training method and apparatus, electronic device and computer-readable storage medium
WO2019233271A1 Image processing method, computer-readable storage medium and electronic device
CN110580428A Image processing method and apparatus, computer-readable storage medium and electronic device
CN108848306B Image processing method and apparatus, electronic device, computer-readable storage medium
CN109712177B Image processing method and apparatus, electronic device and computer-readable storage medium
WO2020001196A1 Image processing method, electronic device and computer-readable storage medium
WO2019233260A1 Method and apparatus for sending advertisement information, storage medium, and electronic device
WO2019223513A1 Image recognition method, electronic device and storage medium
CN107911625A Light metering method and apparatus, readable storage medium and computer device
CN108804658A Image processing method and apparatus, storage medium, electronic device
WO2019029573A1 Image blurring method, computer-readable storage medium and computer device
CN109068060A Image processing method and apparatus, terminal device, computer-readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19815011

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19815011

Country of ref document: EP

Kind code of ref document: A1