CN111145194A - Processing method, processing device and electronic equipment


Info

Publication number
CN111145194A
CN111145194A (application CN201911415251.5A)
Authority
CN
China
Prior art keywords
image
recognized
color
identified
partition
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911415251.5A
Other languages
Chinese (zh)
Inventor
彭方振
陈锋
王娜
黄卡尔
严毅强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201911415251.5A
Publication of CN111145194A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G06T 5/90 Dynamic range modification of images or parts thereof
    • G06T 5/94 Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 7/13 Edge detection
    • G06T 7/90 Determination of colour characteristics
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a processing method, comprising: adjusting an environment parameter of a first environment in which an object to be recognized is located according to a determined adjustment strategy based on at least color information of the object to be recognized, where the adjustment strategy at least makes the color of the first environment different from the color of the object to be recognized; and obtaining a first image of the object to be recognized so as to obtain identification information of the object to be recognized based on at least recognition processing of the first image. The disclosure also provides a processing apparatus and an electronic device.

Description

Processing method, processing device and electronic equipment
Technical Field
The present disclosure relates to a processing method, a processing apparatus, and an electronic device.
Background
With the rapid development of artificial intelligence, automatic control, communication and computer technologies, unmanned stores are developing vigorously, and application scenarios for visual recognition of objects are becoming increasingly widespread. Among them, checkout-counter-based techniques for identifying objects have already been put to practical use. The most critical step in such recognition is segmenting the image of the object to be recognized and then recognizing the segmented image.
In carrying out the disclosed concept, the inventors have found at least the following problems in the prior art: whether the image segmentation is accurate directly affects the recognition accuracy of the object to be recognized. Because objects to be recognized come in diverse colors, some of them are close in color to the background of the checkout counter, and the recognition accuracy for such objects is low.
Disclosure of Invention
One aspect of the present disclosure provides a processing method, including: firstly, adjusting the environmental parameters of the first environment where the object to be identified is located according to a determined adjustment strategy at least based on the color information of the object to be identified, wherein the adjustment strategy at least makes the color of the first environment where the object to be identified is located different from that of the object to be identified. Then, a first image of the object to be recognized is obtained, and identification information of the object to be recognized is obtained at least based on recognition processing of the first image.
Optionally, adjusting the environmental parameter of the first environment in which the object to be identified is located according to the determined adjustment strategy based on at least the color information of the object to be identified may include: determining the color type and the distribution condition of the object to be recognized based on the color information of the object to be recognized; determining a first adjustment strategy matched with the color type and the distribution situation so as to compensate a first color for a region to be identified where the object to be identified is located based on the first adjustment strategy; and the identification degree between the first color and the color type and distribution condition of the object to be identified accords with a first threshold value.
Optionally, adjusting the environmental parameter of the first environment in which the object to be identified is located according to the determined adjustment strategy based on at least the color information of the object to be identified may include: acquiring attitude information of an object to be recognized in a region to be recognized where the object to be recognized is located; determining a second adjustment strategy for adjusting the environmental parameters based on the posture information and the color type and distribution condition of the object to be recognized, so as to determine compensation parameters of the area to be recognized based on the second adjustment strategy; compensating a second color and brightness matched with the second color to an object to be recognized in the area to be recognized based on at least the compensation parameter; wherein the second color is different from a color type of the object to be recognized.
Optionally, compensating the object to be recognized in the area to be recognized for the brightness matched with the second color based on at least the compensation parameter may include: determining a brightness-adjustable light source which is the same as the second color based on the attitude information; and controlling the brightness-adjustable light source to compensate the light with the brightness matched with the second color to the object to be identified based on the compensation parameter.
Optionally, the obtaining the identification information of the object to be recognized based on at least the recognition processing of the first image may include: determining a partition granularity of the first image based on at least attribute information of an object to be identified; performing image partition on the first image according to the partition granularity to remove environmental noise to obtain at least one image partition of the object to be identified; extracting vector features of the at least one image partition at least based on the attribute information, and marking the object to be recognized at least based on the vector features to obtain identification information of the object to be recognized; the number of edge pixel points of the image partitions corresponding to different partition granularities is different.
Optionally, the image partitioning the first image according to the partition granularity to remove the environmental noise, and obtaining at least one image partition of the object to be recognized may include:
obtaining the image gradients of the first image at different pixel points; determining pixel points of which the image gradient is greater than a first threshold value as edge pixel points of the first image; determining partition edge pixel points from the determined edge pixel points according to the partition granularity so as to form at least one image partition based on the partition edge pixel points; and removing the image partition corresponding to the first environment from the at least one image partition based on the color information of the object to be recognized to obtain the at least one image partition of the object to be recognized.
Optionally, the method further comprises: detecting that an object to be recognized is located in a region to be recognized of the first environment, and obtaining a second image of the object to be recognized so as to determine color information of the object to be recognized based on the second image.
Optionally, the method further comprises: and comparing the first image of the object to be recognized with the sample in the sample database to output prompt information for updating the sample database or identification information of the object to be recognized.
Another aspect of the disclosure provides a processing apparatus including an environmental parameter adjustment module and a first image acquisition module. The environment parameter adjusting module is used for adjusting the environment parameters of a first environment where the object to be identified is located according to a determined adjusting strategy at least based on the color information of the object to be identified, wherein the adjusting strategy at least makes the color of the first environment where the object to be identified is located different from that of the object to be identified; and the first image obtaining module is used for obtaining a first image of the object to be recognized so as to obtain the identification information of the object to be recognized at least based on the recognition processing of the first image.
Optionally, the environment parameter adjusting module includes: an object color determining submodule and a first adjusting strategy determining submodule. The object color determination submodule is used for determining the color type and the distribution condition of the object to be identified based on the color information of the object to be identified; the first adjustment strategy determination submodule is used for determining a first adjustment strategy matched with the color type and the distribution condition so as to compensate a first color for the area to be identified where the object to be identified is located based on the first adjustment strategy; and the identification degree between the first color and the color type and distribution condition of the object to be identified accords with a first threshold value.
Optionally, the environment parameter adjusting module includes: a posture obtaining submodule, a second adjustment strategy determination submodule and a compensation submodule. The posture obtaining submodule is used for obtaining posture information of the object to be recognized in the region to be recognized where it is located; the second adjustment strategy determination submodule is used for determining a second adjustment strategy for adjusting the environment parameters based on the posture information and the color type and distribution condition of the object to be recognized, so as to determine compensation parameters of the area to be recognized based on the second adjustment strategy; the compensation submodule is used for compensating a second color and a brightness matched with the second color for the object to be recognized in the area to be recognized based on at least the compensation parameter; wherein the second color is different from the color type of the object to be recognized.
Optionally, the compensation submodule includes a light source determination unit and a compensation unit. The light source determining unit is used for determining a brightness-adjustable light source which is the same as the second color based on the posture information; the compensation unit is used for controlling the brightness adjustable light source to compensate the light with the brightness matched with the second color to the object to be identified based on the compensation parameter.
Optionally, the apparatus further includes an image recognition module, which includes a granularity determination sub-module, a partition sub-module, and an identification information obtaining sub-module. The granularity determining submodule is used for determining the partition granularity of the first image at least based on the attribute information of the object to be identified; the partition submodule is used for carrying out image partition on the first image according to the partition granularity so as to remove environmental noise and obtain at least one image partition of the object to be identified; the identification information obtaining submodule is used for extracting the vector characteristics of the at least one image partition at least based on the attribute information, so as to mark the object to be recognized at least based on the vector characteristics, and obtain the identification information of the object to be recognized; the number of edge pixel points of the image partitions corresponding to different partition granularities is different.
Optionally, the partitioning sub-module includes: the device comprises an image gradient obtaining unit, an edge pixel point determining unit, a partitioning unit and a screening unit. The image gradient obtaining unit is used for obtaining the image gradients of the first image at different pixel points; the edge pixel point determining unit is used for determining pixel points of which the image gradient is greater than a first threshold value as edge pixel points of the first image; the partition unit is used for determining partition edge pixel points from the determined edge pixel points according to the partition granularity so as to form at least one image partition based on the partition edge pixel points; the screening unit is used for removing the image partition corresponding to the first environment from the at least one image partition based on the color information of the object to be recognized to obtain the at least one image partition of the object to be recognized.
Optionally, the apparatus further includes a second image obtaining module, configured to detect that an object to be recognized is in a region to be recognized in the first environment and obtain a second image of the object to be recognized, so as to determine color information of the object to be recognized based on the second image.
Optionally, the apparatus further includes a prompt module, where the prompt module is configured to compare the first image of the object to be recognized with the sample in the sample database, so as to output prompt information for updating the sample database or identification information of the object to be recognized.
Another aspect of the present disclosure provides an electronic device including: an image acquisition assembly for acquiring images; a light source assembly for providing light sources of a plurality of colors; one or more processors; and a computer-readable storage medium for storing one or more computer programs which, when executed by the one or more processors, implement the method described above.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions for implementing the method as described above when executed.
Another aspect of the disclosure provides a computer program comprising computer executable instructions for implementing the method as described above when executed.
According to the processing method provided by the embodiments of the present disclosure, an image of the object to be recognized is pre-collected and the colors in it are recognized; a color with high contrast against those colors is then calculated, and the environment parameter is adjusted automatically so that the image of the object to be recognized is clearly separated from the image of the environment. This effectively improves the segmentation precision of the image of the object to be recognized and thereby improves its recognition precision.
Drawings
For a more complete understanding of the present disclosure and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
fig. 1 schematically shows an application scenario of a processing method, a processing apparatus and an electronic device according to an embodiment of the present disclosure;
fig. 2 schematically illustrates a system architecture suitable for a processing method, processing apparatus and electronic device according to an embodiment of the present disclosure;
FIG. 3 schematically shows a flow chart of a processing method according to an embodiment of the present disclosure;
fig. 4 schematically shows a schematic diagram of a to-be-detected object in a first scene in the prior art;
fig. 5 schematically shows a schematic diagram of a to-be-detected object in a second scene in the prior art;
fig. 6 schematically illustrates a schematic view of an object to be detected in a first scene according to an embodiment of the present disclosure;
fig. 7 schematically illustrates a schematic view of an object to be detected in a second scene according to an embodiment of the present disclosure;
FIG. 8 schematically illustrates a schematic diagram of an edge pixel point according to an embodiment of the disclosure;
FIG. 9 schematically illustrates a schematic diagram of an edge pixel point according to another embodiment of the present disclosure;
FIG. 10 schematically illustrates a schematic view of a light source of an electronic device according to an embodiment of the disclosure;
FIG. 11 schematically illustrates a schematic view of a light source of an electronic device according to another embodiment of the present disclosure;
FIG. 12 schematically shows a flow chart of a processing method according to another embodiment of the present disclosure;
FIG. 13 schematically shows a block diagram of a processing device according to an embodiment of the disclosure; and
FIG. 14 schematically shows a block diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, such a construction is generally intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.). Where a convention analogous to "at least one of A, B or C, etc." is used, such a construction is likewise intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.).
Some block diagrams and/or flow diagrams are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the instructions, which execute via the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks. The techniques of this disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). In addition, the techniques of this disclosure may take the form of a computer program product on a computer-readable storage medium having instructions stored thereon for use by or in connection with an instruction execution system.
Fig. 1 schematically shows an application scenario of a processing method, a processing apparatus and an electronic device according to an embodiment of the disclosure. It should be noted that fig. 1 is only an example of a scenario in which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, but does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, a scenario in which settlement is performed at a checkout station is taken as an example. The checkout station comprises a table body 1, a settlement area 2, a display device 3 and an image acquisition device 4. Of course, the checkout station may also include devices such as a storage detection platform, a weighing device, a payment device, a two-dimensional code, a packaging device, a code scanning device and the like. The checkout station can use the image acquisition device 4 to capture images of the goods on the settlement area 2 and then perform product recognition to determine the price and other information of the goods, which makes settlement convenient for users. Compared with a settlement mode in which a scanning gun is used to scan goods, neither the user nor the cashier needs to scan a designated area of the goods; information such as prices can be recognized simply by placing the goods in the settlement area 2, which helps improve convenience of operation.
A product identification technique based on such a checkout station has already been put to practical use. The most critical factor in this technique is segmenting the product image and then recognizing it; the accuracy of the image segmentation determines the accuracy of product identification. In actual use, because product images come in diverse colors, some products are close in color to the background of the settlement table, so the image segmentation precision is low and the product identification precision decreases.
The embodiments of the present disclosure provide a processing method, a processing apparatus and an electronic device. The method includes an environment adjustment process and an image processing process. In the environment adjustment process, the environment parameters of the first environment in which the object to be recognized is located are adjusted according to a determined adjustment strategy based on at least the color information of the object to be recognized, where the adjustment strategy at least makes the color of the first environment different from that of the object to be recognized. After the environment adjustment process is finished, the image processing process obtains a first image of the object to be recognized and obtains identification information of the object based on at least recognition processing of the first image. Because an image of the object is pre-collected and its colors are recognized, a color with high contrast against those colors can be calculated and the environment parameters adjusted automatically, so that the image of the object is clearly separated from the image of the environment; this improves the image segmentation precision and, in turn, the product recognition precision.
Fig. 2 schematically shows a system architecture suitable for a processing method, a processing apparatus and an electronic device according to an embodiment of the present disclosure.
It should be noted that fig. 2 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 2, the system architecture 200 according to this embodiment may include terminal devices 201, 202, 203, a network 204 and a server 205. The network 204 may include a plurality of gateways, routers, hubs, network wires, etc. to provide a medium for communication links between the end devices 201, 202, 203 and the server 205. Network 204 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 201, 202 to interact with other terminal devices and the server 205 via the network 204 to receive or send information and the like, such as sending payment requests, image processing requests and receiving processing results. The terminal devices 201, 202, 203 may be installed with various applications having communication clients, such as banking applications, shopping applications, web browser applications, search applications, office applications, instant messaging tools, mailbox clients, social platform software, etc. (just examples).
The terminal devices 201, 202, including but not limited to smart phones, virtual reality devices, augmented reality devices, tablet computers, laptop computers, etc., may implement online payment functionality.
The terminal device 203 may be an electronic device having a camera and a light supplement function for a specific color, and the electronic device may identify an object to be identified by light supplement, photographing, image processing, and the like, including but not limited to a checkout counter, an object identification device, and the like.
The server 205 may receive the request and process the request. For example, the server 205 may be a back office management server, a cluster of servers, or the like. The background management server can analyze and process the received payment request, image processing request, compensation color request and the like, and feed back the processing result to the terminal equipment.
It should be noted that the processing method provided by the embodiment of the present disclosure may be generally executed by the terminal device 203 and the server 205. Accordingly, the processing device provided by the embodiment of the present disclosure may be generally disposed in the terminal device 203 or the server 205. The processing method provided by the embodiments of the present disclosure may also be performed by a terminal device, a server or a server cluster communicating with the terminal devices 201, 202, 203 and/or the server 205.
It should be understood that the number of terminal devices, networks, and servers are merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
FIG. 3 schematically shows a flow chart of a processing method according to an embodiment of the disclosure.
As shown in fig. 3, the method includes operations S301 to S303.
In operation S301, an environment parameter of a first environment in which an object to be recognized is located is adjusted according to a determined adjustment policy based on at least color information of the object to be recognized, where the adjustment policy at least makes a color of the first environment in which the object to be recognized is located different from that of the object to be recognized.
In this embodiment, the color information of the object to be recognized may be determined by image processing, input by a user, or received from another electronic device. Environmental parameters include, but are not limited to, any one or more of: color, fill light brightness, compensation color, and fill light duration. The adjustment strategy includes, but is not limited to, at least one of: a fill-light-related strategy, a background-switching-related strategy, and the like. A fill-light-related strategy may supplement light for the object to be recognized and/or the ambient light with a fill lamp or the like. A background-switching-related strategy may switch the background of the environment in which the object is located; for example, the object is placed on a device whose displayed image can be adjusted (such as a display panel or a color-adjustable backlight), and the environment parameters are adjusted by adjusting the displayed image. In these ways the color of the first environment can be made different from that of the object to be recognized, which helps to accurately determine the image of the object from the captured image.
In one embodiment, the adjusting the environmental parameter of the first environment in which the object to be recognized is located according to the determined adjustment strategy based on at least the color information of the object to be recognized may include the following operations.
Firstly, the color type and the distribution condition of the object to be recognized are determined based on the color information of the object to be recognized. For example, an image including the object to be recognized and the environment in which the object to be recognized is located may be captured, and the image may be subjected to image processing, such as color recognition, to determine the color type and distribution of the object to be recognized. For example, the image includes four colors of red, yellow, blue and white, wherein the approximate region where the object to be recognized is located includes the four colors of red, yellow, blue and white, the edge region of the object to be recognized mainly includes blue and white, and red and yellow are located in the inner region of the object to be recognized.
Then, a first adjustment strategy matched with the color type and the distribution situation is determined, so that the first color is compensated, based on the first adjustment strategy, for the area to be recognized where the object is located. For example, since the edge region of the object to be recognized mainly includes blue and white, and the background color of the first environment is also white, the background color of the first environment needs to be adjusted to a color that differs greatly from both white and blue, such as red or yellow. In addition, the first adjustment strategy may further include the background color adjustment mode, for example, adjusting the background color by supplementing light or by changing a displayed image; it may also include duration information for the adjustment, and so on. Specifically, the first adjustment strategy may be determined by looking up a database that stores mappings between color types and distributions and first adjustment strategies; each strategy may be determined through simulation, calibration, and the like.
The degree of discrimination between the first color and the color type and distribution of the object to be recognized meets a first threshold. For example, when the degree of discrimination between a candidate color and the color types and distribution of the object exceeds 90% (for instance, the candidate color differs from the object's colors over 90% of the distribution area), that candidate may be used as the first color.
The mapping between color types/distributions and first adjustment strategies may be built as follows. For a specified article (such as a test sample containing several colors distributed over several areas), the environmental parameters are adjusted under each candidate first adjustment strategy, a photograph is taken, and the image of the object is segmented, yielding an image segmentation accuracy for each strategy; the mapping is then calibrated based on these accuracies. Here, image segmentation accuracy is the ratio of the area (or pixel count) of the automatically segmented image of the object to the area (or pixel count) of the accurate image (such as one determined by calibration). Concretely, a first adjustment strategy is determined from the color type and distribution, the environmental parameters of the first environment are adjusted accordingly, an image of the object is obtained and segmented, and the mapping relation is established based on whether the segmentation accuracy exceeds a preset accuracy threshold (for example, an association is recorded when it does).
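To make the strategy-selection step concrete, the following is a minimal sketch, assuming OpenCV and NumPy are available: the dominant colors of the pre-captured image are estimated by k-means clustering in Lab space, and the candidate background color farthest from all object colors is chosen as the first color. The function names, the use of k-means, and the Lab distance criterion are illustrative assumptions, not the algorithm prescribed by the disclosure.

```python
# A minimal sketch of first-color selection; k-means in Lab space and the
# max-min distance rule are illustrative assumptions.
import cv2
import numpy as np

def dominant_colors(image_bgr, k=4):
    """Estimate the color types and their distribution (pixel shares)."""
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB).reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(lab, k, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    shares = np.bincount(labels.ravel(), minlength=k) / labels.size
    return centers, shares          # color types, distribution condition

def pick_first_color(object_colors_lab, candidate_colors_lab):
    """Choose the candidate background color farthest from every object color."""
    return max(candidate_colors_lab,
               key=lambda c: min(np.linalg.norm(np.asarray(c) - oc)
                                 for oc in object_colors_lab))
```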
In another embodiment, the adjusting the environmental parameter of the first environment in which the object to be recognized is located according to the determined adjustment strategy based on at least the color information of the object to be recognized may include the following operations.
First, posture information of an object to be recognized in a region to be recognized where the object to be recognized is located is obtained. Wherein the attitude information may include: the standing and lying posture information, the folding posture information and the like of the object to be recognized.
Then, a second adjustment strategy for adjusting the environment parameters is determined based on the posture information and the color type and distribution condition of the object to be recognized, so that compensation parameters of the area to be recognized are determined based on the second adjustment strategy.
Then, a second color and a brightness matched with the second color are compensated for the object to be recognized in the area to be recognized based on at least the compensation parameters, where the second color is different from the color type of the object to be recognized. In this embodiment, the compensation parameter may include compensation brightness in addition to the compensation color. Because an image sensor's sensitivity is not the same for different colors, the brightness of the compensating light can be appropriately reduced for colors with high sensitivity, saving electric energy and extending the service life of the supplementary light; for colors with low sensitivity, the brightness can be appropriately increased to improve the recognition effect.
Taking the example of adjusting the environmental parameter of the first environment by compensating light, compensating the object to be recognized in the area to be recognized for the brightness matching with the second color based on at least the compensation parameter may include the following operations.
First, a luminance-adjustable light source identical to the second color is determined based on the pose information. The brightness-adjustable light sources can be respectively distributed in different directions of the object to be identified. A brightness adjustable light source may emit light of one or more colors. For example, a brightness-adjustable light source is integrated with a blue light source (e.g., a blue LED) and a yellow light source (e.g., a yellow LED), and the brightness of the blue light and/or the brightness of the yellow light can be adjusted by adjusting the current applied to the blue light source and/or the yellow light source. Furthermore, when the brightness of the blue light and the brightness of the yellow light are adjusted, color adjustment can be realized (for example, blue light, green light, yellow light, white light, or the like is realized by adjusting the ratio of the blue light to the yellow light). For another example, a plurality of single-color light sources with adjustable brightness may be provided, respectively.
Then, the brightness-adjustable light source is controlled to compensate the object to be recognized for the light with the brightness matched with the second color based on the compensation parameter.
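As a concrete illustration of these two operations, the sketch below selects the brightness-adjustable light sources matching the second color on the side indicated by the posture information, and scales their drive level by a per-color sensor sensitivity table. The LightSource class, the set_level() driver call, and the sensitivity values are hypothetical stand-ins for real hardware, not interfaces defined by the disclosure.

```python
# A hedged sketch of second-color compensation; the LightSource interface,
# set_level() driver call and sensitivity table are hypothetical.
from dataclasses import dataclass

SENSOR_SENSITIVITY = {"red": 0.9, "green": 1.0, "blue": 0.7, "yellow": 0.85}

@dataclass
class LightSource:
    color: str
    direction: str                      # e.g. "above", "below", "left", "right"

    def set_level(self, level: float) -> None:
        print(f"{self.color} source ({self.direction}) -> level {level:.2f}")

def compensate(light_sources, facing_directions, second_color, base_level=0.8):
    # Keep only sources of the second color that face the object, per its posture.
    selected = [s for s in light_sources
                if s.color == second_color and s.direction in facing_directions]
    # Lower the drive level for colors the sensor sees well; raise it otherwise.
    level = min(1.0, base_level / SENSOR_SENSITIVITY.get(second_color, 1.0))
    for source in selected:
        source.set_level(level)
    return selected
```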
In operation S303, a first image of the object to be recognized is obtained, so as to obtain identification information of the object to be recognized based on at least recognition processing of the first image.
The recognition processing of the first image may include: extracting the commodity image from the captured first image, performing feature extraction on the commodity image to obtain image features, and obtaining an image recognition result based on those features. In addition, image preprocessing may be applied to the first image and/or the commodity image.
To facilitate understanding of the technical solution of the present disclosure, the prior art is first described with reference to fig. 4 and 5, and embodiments of the present disclosure are then described with reference to fig. 6 and 7 based on the same scenarios.
Fig. 4 schematically shows a schematic diagram of a to-be-detected object in a first scene in the prior art.
As shown in fig. 4, the checkout station of fig. 1 is taken as an example. As shown in the upper diagram of fig. 4, the commodity placed on the settlement area 2 includes color areas 51 and 52. The color of area 51 is the same as or similar to that of the settlement area 2, so the two are not easily distinguished in the captured image. In the conventional art, when the product is image-recognized, since the colors of area 51 and the settlement area 2 are not easily distinguished, the obtained product image is likely to be only area 52 as shown in the lower diagram of fig. 4, with area 51 erroneously judged to be part of the settlement area 2. The image recognition result may therefore be inaccurate due to this misjudgment.
Fig. 5 schematically shows a schematic diagram of a to-be-detected object in a second scene in the prior art.
As shown in fig. 5, the checkout station of fig. 1 is again taken as an example. As shown in the upper diagram of fig. 5, the commodity placed on the settlement area 2 includes color areas 51 and 52. The color of area 52 is the same as or similar to that of the settlement area 2, so the two are not easily distinguished in the captured image. In the conventional art, when the product is image-recognized, the obtained product image is likely to be only area 51 as shown in the lower diagram of fig. 5, with area 52 erroneously judged to be part of the settlement area 2. The image recognition result may therefore be inaccurate due to this misjudgment.
Fig. 6 schematically illustrates a schematic diagram of an object to be detected in a first scene according to an embodiment of the disclosure.
As shown in fig. 6, the upper diagram of fig. 6 shows the scene of fig. 4. When the image is detected to include the two colors, a color complementary to both can be found and used to supplement light to the environment, yielding the result shown in the lower diagram of fig. 6. At this point, the difference between the color of the settlement area 2 and the colors of areas 51 and 52 on the commodity is relatively large, so the recognition accuracy of the commodity image can be effectively improved, and with it the accuracy of the commodity information recognized from that image.
Fig. 7 schematically shows a schematic diagram of an object to be detected in a second scene according to an embodiment of the present disclosure.
As shown in fig. 7, the upper diagram of fig. 7 shows the scene of fig. 5. When the image is detected to include the two colors, a color complementary to both can be found and used to supplement light to the environment, yielding the result shown in the lower diagram of fig. 7. At this point, the difference between the color of the settlement area 2 and the colors of areas 51 and 52 on the commodity is relatively large, so the recognition accuracy of the commodity image can be effectively improved, and with it the accuracy of the commodity information recognized from that image.
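The complement-finding step in fig. 6 and 7 can be sketched as below. Taking the RGB complement of the mean detected color is an assumption made here for brevity; a production system might instead search a perceptual space such as CIELAB.

```python
# A minimal sketch: fill-light color as the RGB complement of the mean of the
# detected commodity colors; the RGB-complement rule is an assumption.
import numpy as np

def complementary_fill_color(detected_rgb):
    mean = np.mean(np.asarray(detected_rgb, dtype=float), axis=0)
    return tuple(int(255 - c) for c in mean)

# e.g. a blue area and a white area -> a yellowish fill color
print(complementary_fill_color([(40, 60, 200), (240, 240, 240)]))  # (115, 105, 35)
```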
In another embodiment, the obtaining the identification information of the object to be recognized based on at least the recognition processing of the first image may include the following operations.
First, a partition granularity of the first image is determined based on at least attribute information of the object to be recognized. The attribute information includes, but is not limited to, at least one of: the type of the article, configuration attributes entered according to the user's preprocessing requirements, and the like; it may also be a default attribute or configuration information, such as the name of the goods, quantity, price, total price, manufacturer, model, and so on.
Then, image partitioning is carried out on the first image according to the partition granularity to remove environmental noise, and at least one image partition of the object to be recognized is obtained. For example, an image of a product of one or more products is determined from the first image (e.g., allowing simultaneous identification of multiple products), and images other than the image of the product in the first image are removed as ambient noise, i.e., the background image may be removed.
Then, vector features of the at least one image partition are extracted based on at least the attribute information, and the object to be recognized is marked based on at least the vector features to obtain its identification information; a sketch of one possible extraction is given below. The vector feature extraction for an image partition can be the same as in the prior art; for example, the vector features may describe at least one of: edges, corners, regions, ridges, colors, textures, shapes, spatial relationships, and the like.
The number of edge pixel points of the image partitions corresponding to different partition granularities is different.
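Before the partitioning step is detailed below, here is a hedged sketch of the vector-feature extraction just described, using a normalized color histogram plus Hu-moment shape features as stand-ins for the edge/corner/texture/shape descriptors named above; the specific descriptor choice is an assumption, not the feature set prescribed by the disclosure.

```python
# A hedged sketch of vector-feature extraction for one image partition; the
# histogram + Hu-moment descriptor is an assumed stand-in.
import cv2
import numpy as np

def partition_features(image_bgr, mask):
    m = mask.astype(np.uint8)
    hist = cv2.calcHist([image_bgr], [0, 1, 2], m,
                        [8, 8, 8], [0, 256, 0, 256, 0, 256]).ravel()
    hist /= hist.sum() + 1e-9                                     # color feature
    hu = cv2.HuMoments(cv2.moments(m, binaryImage=True)).ravel()  # shape feature
    return np.concatenate([hist, hu])     # vector feature used to mark the object
```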
Specifically, the image partitioning of the first image according to the partition granularity to remove the environmental noise, and obtaining at least one image partition of the object to be recognized may include the following operations.
First, the image gradients of the first image at different pixel points are obtained; the gradient may be computed from gray values or pixel values. This embodiment can realize image partitioning by edge detection on a grayscale image or the original color image, and thereby obtain an image of the object to be recognized. The purpose of edge detection is to find points in an image where parameters such as luminance change significantly; significant changes in image attributes typically reflect important events and property changes, such as discontinuities in depth, discontinuities in surface orientation, material property changes, and scene lighting changes. Edge detection greatly reduces the data volume, eliminates irrelevant information, and preserves the important structural attributes of the image.
Then, pixel points whose image gradient is greater than a first threshold are determined as edge pixel points of the first image. In particular, edge detection may be performed with search-based or zero-crossing-based approaches. A search-based approach detects the boundary by finding maxima and minima in the first derivative of the image; the boundary is typically located in the direction where the gradient is largest. A zero-crossing-based approach finds boundaries by locating zero crossings of the second derivative of the image, typically Laplacian zero crossings or zero crossings of a nonlinear difference. The first threshold may be determined by calibration or the like.
And then, according to the partition granularity, partition edge pixel points are determined from the determined edge pixel points, so that at least one image partition is formed based on the partition edge pixel points.
Fig. 8 schematically shows a schematic diagram of an edge pixel point according to an embodiment of the present disclosure. Fig. 9 schematically shows a schematic diagram of an edge pixel point according to another embodiment of the present disclosure.
As shown in fig. 8 and fig. 9, they are schematic diagrams of pixel points of the same pattern under different partition granularities. And when the partition granularities are different, the number of the determined edge pixel points is different. In general, the smaller the partition granularity is, the more edge pixel points are, and the closer the obtained edge is to the true edge of the image.
Then, removing the image partition corresponding to the first environment from the at least one image partition based on the color information of the object to be recognized, and obtaining the at least one image partition of the object to be recognized.
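Putting the four sub-steps together, the sketch below computes Sobel gradient magnitudes, thresholds them into edge pixel points, treats connected non-edge regions as image partitions, and removes partitions whose mean color matches the known environment color. The numeric thresholds and the use of Sobel operators and connected components are illustrative assumptions.

```python
# A hedged sketch of the partitioning pipeline; Sobel gradients, connected
# components and the numeric thresholds are illustrative assumptions.
import cv2
import numpy as np

def partition_object(first_image_bgr, env_color_bgr, grad_threshold=60.0, color_tol=40.0):
    gray = cv2.cvtColor(first_image_bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    magnitude = cv2.magnitude(gx, gy)                  # image gradient per pixel
    edges = magnitude > grad_threshold                 # edge pixel points
    # Connected non-edge regions become candidate image partitions.
    n, labels = cv2.connectedComponents((~edges).astype(np.uint8))
    partitions = []
    for i in range(1, n):                              # label 0 is the edge set
        mask = labels == i
        mean_color = first_image_bgr[mask].mean(axis=0)
        # Drop partitions that look like the first environment (background).
        if np.linalg.norm(mean_color - np.asarray(env_color_bgr, dtype=float)) > color_tol:
            partitions.append(mask)
    return partitions
```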
The brightness adjustable light source is exemplarily described below with reference to fig. 10 and 11.
Fig. 10 schematically illustrates a schematic view of a light source of an electronic device according to an embodiment of the disclosure.
As shown in fig. 10, a plurality of brightness adjustable light sources 6 may be respectively disposed in one or more directions above the stage body 1 and/or the settlement area 2. The plurality of brightness adjustable light sources 6 may be monochromatic light sources or color adjustable light sources. The plurality of brightness adjustable light sources 6 may be divided into a plurality of groups, and respectively arranged in a plurality of directions, and each group may include a plurality of light sources of the same or different colors.
Fig. 11 schematically illustrates a schematic view of a light source of an electronic device according to another embodiment of the present disclosure.
As shown in fig. 11, the settlement area 2 may be a transparent or translucent area. A plurality of brightness-adjustable light sources 6 may be respectively disposed below the settlement area 2. The plurality of brightness adjustable light sources 6 may be monochromatic light sources or color adjustable light sources. The plurality of brightness adjustable light sources 6 may be divided into a plurality of groups, and respectively arranged in a plurality of directions, and each group may include a plurality of light sources of the same or different colors.
In this way, the function of adjusting the environmental parameters can be realized simply, conveniently and rapidly, at low cost and in a manner that is easy to popularize.
FIG. 12 schematically shows a flow chart of a processing method according to another embodiment of the disclosure.
As shown in fig. 12, the method may further include operation S1201.
It is detected that an object to be recognized is located in a region to be recognized of the first environment, and a second image of the object is obtained so as to determine color information of the object based on the second image.
For example, the region to be recognized may be the settlement area 2; when it is detected that an article has been placed on the settlement area 2, a photograph can be taken to obtain a second image of the object and determine its color information. The presence or absence of an item on the settlement area 2 can be determined by an image sensor, an electronic scale, or the like; alternatively, a manually input signal, such as pressing a set button, may indicate that items are present.
It should be noted that, for a camera arranged at a fixed position, the viewing range of the camera can be used as the area to be identified. Further, when determining the color information of the object to be recognized, all colors included in the area to be recognized may be taken as the color information of the object to be recognized. The color information of the object to be identified can be determined simply and quickly by the method.
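A minimal sketch of this trigger path follows. The Scale and Camera stubs, the 5-gram threshold, and the callback wiring are hypothetical device details, not part of the disclosure.

```python
# A minimal sketch of the placement trigger; Scale/Camera are hypothetical
# device stubs and the weight threshold is an assumption.
class Scale:
    def read(self) -> float: return 120.0      # grams currently on the pan

class Camera:
    def capture(self): return "second_image"   # stand-in for a real frame

def on_placement(scale: Scale, camera: Camera, extract_colors, threshold_g=5.0):
    if scale.read() > threshold_g:             # goods detected on the area
        second_image = camera.capture()        # second image of the object
        return extract_colors(second_image)    # all colors in the region
    return None
```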
In another embodiment, the method may further include operation S1203.
In operation S1203, the first image of the object to be recognized is compared with the sample in the sample database to output prompt information for updating the sample database or identification information of the object to be recognized.
In this embodiment, a sample most similar to the image may be determined from the sample database by an image similarity comparison method, and then the identification information of the object to be recognized is determined based on the identification information of the sample. The sample database may include a mapping relationship between the sample and the identification information.
In addition, taking the settlement station as an example, an article may be placed on it in various postures, such as different placement positions, placement angles, degrees of bending, degrees of folding, and the like. Multiple images may therefore exist for one commodity, and when a sample database is constructed, only a limited number of posture photos can be associated with the corresponding identification information for one sample. Increasing the number of photographs of a sample in different postures helps improve the recognition accuracy of the object to be detected. Therefore, the sample database can be updated based on the first image of the object to be recognized; the update can be performed automatically, or the first image can first be sent to an auditor for review. In this way, the more the sample database is used, the higher the recognition accuracy based on it becomes.
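The comparison step can be sketched as cosine similarity over feature vectors, with a confidence cutoff deciding between outputting identification information and prompting a database update. The 0.85 cutoff and the (vector, identification) tuple layout of the database are assumptions, not parameters fixed by the disclosure.

```python
# A hedged sketch of sample-database comparison; the cosine metric, cutoff and
# database layout are assumptions.
import numpy as np

def identify_or_prompt(first_image_vec, sample_db, cutoff=0.85):
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    best_ident, best_score = None, -1.0
    for sample_vec, ident in sample_db:        # (feature vector, identification info)
        score = cosine(first_image_vec, sample_vec)
        if score > best_score:
            best_ident, best_score = ident, score
    if best_score < cutoff:                    # weak match: suggest updating the db
        return {"prompt": "update sample database with this first image"}
    return {"identification": best_ident}
```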
According to the processing method provided by the embodiments of the present disclosure, the environmental parameters (such as the background color and brightness of the settlement table) are adjusted by hardware dimming: for example, the settlement table is made of transparent glass, colored lamps are added inside it, and the environment is adjusted by controlling the light color and illumination intensity. Specifically, an image including the commodity is collected in advance and its colors are recognized; a suitable background color is then calculated by a specific algorithm and the environmental parameters, such as the backlight color and illumination intensity, are adjusted automatically, so that the commodity image is clearly separated from the background of the settlement table. This improves the image segmentation precision and thereby the accuracy of commodity recognition.
Fig. 13 schematically shows a block diagram of a processing device according to an embodiment of the disclosure.
As shown in fig. 13, the processing apparatus 1300 may include an environmental parameter adjustment module 1310 and a first image obtaining module 1330.
The environment parameter adjusting module 1310 is configured to adjust, according to a determined adjustment policy, an environment parameter of a first environment in which the object to be recognized is located based on at least color information of the object to be recognized, where the adjustment policy at least makes a color of the first environment in which the object to be recognized is located different from that of the object to be recognized.
The first image obtaining module 1330 is configured to obtain a first image of the object to be recognized, so as to obtain identification information of the object to be recognized based on at least a recognition process of the first image.
In one embodiment, the environment parameter adjustment module 1310 may include: an object color determining submodule and a first adjusting strategy determining submodule.
The object color determination submodule is used for determining the color type and the distribution condition of the object to be identified based on the color information of the object to be identified.
The first adjustment strategy determination sub-module is used for determining a first adjustment strategy matched with the color type and distribution condition.
The first adjustment strategy compensates the region to be identified, in which the object to be identified is located, with a first color, where the identification degree between the first color and the color type and distribution condition of the object to be identified meets a first threshold value.
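A minimal sketch of such a first adjustment strategy follows, assuming the identification degree is modeled as plain RGB distance and the lamps offer a small fixed palette; both assumptions are illustrative, since the embodiment does not fix the metric or the palette.

```python
import numpy as np

# Candidate lamp colors (RGB); an assumed palette for illustration.
CANDIDATE_COLORS = np.array(
    [[255, 0, 0], [0, 255, 0], [0, 0, 255], [255, 255, 0], [0, 255, 255]],
    dtype=np.float64,
)

def first_adjustment_color(object_colors, first_threshold=120.0):
    """object_colors: (N, 3) array of the object's dominant RGB colors.

    Returns the first candidate whose distance to every dominant object
    color meets the first threshold, or None if no candidate qualifies.
    """
    object_colors = np.asarray(object_colors, dtype=np.float64)
    for candidate in CANDIDATE_COLORS:
        # Identification degree modeled as the smallest RGB distance
        # between the candidate and any dominant object color.
        degree = float(np.min(np.linalg.norm(object_colors - candidate, axis=1)))
        if degree >= first_threshold:
            return candidate.astype(int)
    return None
```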
In another embodiment, the environment parameter adjustment module may include: a posture obtaining submodule, a second adjustment strategy determination submodule, and a compensation submodule.
The gesture obtaining submodule is used for obtaining gesture information of the object to be recognized in the area to be recognized where the object to be recognized is located.
The second adjustment strategy determination submodule is used for determining a second adjustment strategy for adjusting the environment parameters based on the posture information and the color type and distribution condition of the object to be recognized, so that the compensation parameters of the area to be recognized are determined based on the second adjustment strategy.
The compensation submodule is used for compensating a second color and the brightness matched with the second color for the object to be recognized in the area to be recognized at least based on the compensation parameter. Wherein the second color is different from a color type of the object to be recognized.
Specifically, the compensation submodule may include a light source determination unit and a compensation unit.
The light source determination unit is configured to determine, based on the posture information, a brightness-adjustable light source of the same color as the second color.
The compensation unit is used for controlling the brightness-adjustable light source, based on the compensation parameter, to emit light whose brightness matches the second color toward the object to be identified.
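Put together, the two units might look like the following sketch, in which the light-source layout, the pose representation (a 2-D position), and the brightness law are all assumptions introduced for illustration.

```python
from dataclasses import dataclass

@dataclass
class LightSource:
    position: tuple      # assumed (x, y) location under the settlement table
    color: tuple         # RGB color of the lamp
    brightness: float = 0.0

def compensate(light_sources, pose_xy, second_color, compensation_parameter):
    """Pick the nearest lamp of the second color and set its brightness."""
    # Light source determination unit: same color as the second color.
    candidates = [s for s in light_sources if s.color == second_color]
    if not candidates:
        return None
    # Nearest candidate to the object's pose (squared distance suffices).
    def sq_dist(s):
        dx = s.position[0] - pose_xy[0]
        dy = s.position[1] - pose_xy[1]
        return dx * dx + dy * dy
    source = min(candidates, key=sq_dist)
    # Compensation unit: brightness driven by the compensation parameter,
    # clamped to [0, 1] as an assumed normalized dimming range.
    source.brightness = max(0.0, min(1.0, compensation_parameter))
    return source
```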
In another embodiment, the apparatus 1300 may further include an image recognition module, which may include a granularity determination sub-module, a partition sub-module, and an identification information obtaining sub-module.
The granularity determination submodule is used for determining the partition granularity of the first image at least based on the attribute information of the object to be identified.
The partitioning submodule is used for performing image partitioning on the first image according to the partition granularity, so as to remove environmental noise and obtain at least one image partition of the object to be identified.
The identification information obtaining submodule is used for extracting vector features of the at least one image partition based on at least the attribute information, so as to mark the object to be recognized based on at least the vector features and thereby obtain the identification information of the object to be recognized. Image partitions corresponding to different partition granularities have different numbers of edge pixel points.
For example, the partitioning sub-module may include: the device comprises an image gradient obtaining unit, an edge pixel point determining unit, a partitioning unit and a screening unit.
The image gradient obtaining unit is used for obtaining the image gradients of the first image at different pixel points.
The edge pixel point determining unit is used for determining the pixel points of which the image gradient is greater than a first threshold value as the edge pixel points of the first image.
The partition unit is used for determining partition edge pixel points from the determined edge pixel points according to the partition granularity so as to form at least one image partition based on the partition edge pixel points.
The screening unit is used for removing the image partition corresponding to the first environment from the at least one image partition based on the color information of the object to be recognized to obtain the at least one image partition of the object to be recognized.
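The four units can be read as a single pipeline, sketched below with OpenCV. Sobel gradients, connected components, and the background-color tolerance test stand in for steps the embodiment leaves unspecified; the thresholds are illustrative assumptions.

```python
import cv2
import numpy as np

def partition_object(first_image, background_bgr,
                     grad_threshold=60.0, bg_tolerance=40.0):
    """Return boolean masks for image partitions belonging to the object."""
    gray = cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY)
    # Image gradient obtaining unit: gradient magnitude at every pixel.
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = np.hypot(gx, gy)
    # Edge pixel point determining unit: gradient above the first threshold.
    edges = (magnitude > grad_threshold).astype(np.uint8)
    # Partition unit: regions bounded by edge pixels become image partitions.
    num_labels, labels = cv2.connectedComponents(1 - edges)
    # Screening unit: drop partitions whose mean color matches the first
    # environment (the background), keeping only the object's partitions.
    background = np.asarray(background_bgr, dtype=np.float64)
    object_partitions = []
    for label in range(1, num_labels):
        mask = labels == label
        if not mask.any():
            continue
        mean_color = first_image[mask].mean(axis=0)
        if np.linalg.norm(mean_color - background) > bg_tolerance:
            object_partitions.append(mask)
    return object_partitions
```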
In another embodiment, the apparatus 1300 may further include a second image obtaining module. The second image obtaining module is used for, upon detecting that an object to be identified is located in the region to be identified of the first environment, obtaining a second image of the object to be identified, so that the color information of the object to be identified can be determined based on the second image.
In another embodiment, the apparatus 1300 may further include a prompt module. The prompt module is used for comparing the first image of the object to be recognized with the sample in the sample database so as to output prompt information for updating the sample database or identification information of the object to be recognized.
For the operations performed by the modules, sub-modules and units included in the apparatus 1300, reference may be made to the description above; they are not repeated here.
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
For example, any of the environmental parameter adjustment module 1310 and the first image obtaining module 1330 may be combined and implemented in one module, or any of the modules may be split into a plurality of modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of the other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of the environment parameter adjusting module 1310 and the first image obtaining module 1330 may be implemented at least partially as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware. Alternatively, at least one of the environment parameter adjustment module 1310 and the first image acquisition module 1330 may be implemented at least in part as a computer program module, which when executed, may perform corresponding functions.
FIG. 14 schematically shows a block diagram of an electronic device according to an embodiment of the disclosure. The electronic device shown in fig. 14 is only an example and does not limit the functions or scope of use of the embodiments of the present disclosure in any way.
As shown in fig. 14, the electronic device 1400 includes: one or more processors 1410, computer-readable storage media 1420, image acquisition assembly 1430, and light source assembly 1440. The image acquisition component 1430 is used to acquire images. The light source assembly 1440 is used to provide light sources of various colors. The electronic device may perform a method according to an embodiment of the present disclosure.
In particular, processor 1410 may include, for example, a general purpose microprocessor, an instruction set processor and/or related chip sets and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), and/or the like. The processor 1410 may also include onboard memory for caching purposes. Processor 1410 may be a single processing unit or multiple processing units for performing different actions of a method flow according to an embodiment of the disclosure.
Computer-readable storage medium 1420, for example, can be a non-volatile computer-readable storage medium, specific examples including, but not limited to: magnetic storage devices, such as magnetic tape or Hard Disk Drives (HDDs); optical storage devices, such as compact disks (CD-ROMs); memory such as Random Access Memory (RAM) or flash memory, etc.
The computer-readable storage medium 1420 may include a program 1421, which program 1421 may include code/computer-executable instructions that, when executed by the processor 1410, cause the processor 1410 to perform a method according to an embodiment of the disclosure or any variation thereof.
The program 1421 may be configured with, for example, computer program code comprising computer program modules. For example, in an example embodiment, the code in program 1421 may include one or more program modules, such as program module 1421A, program module 1421B, and so on. It should be noted that the division and the number of program modules are not fixed; those skilled in the art may use suitable program modules or combinations of program modules according to the actual situation. When these program modules are executed by the processor 1410, the processor 1410 may perform the method according to the embodiments of the present disclosure or any variation thereof.
In accordance with embodiments of the present disclosure, the processor 1410 may interact with the computer-readable storage medium 1420 to perform a method in accordance with embodiments of the present disclosure, or any variant thereof.
According to an embodiment of the present disclosure, at least one of the environment parameter adjustment module 1310 and the first image obtaining module 1330 may be implemented as a program module described with reference to fig. 14, which, when executed by the processor 1410, may implement the corresponding operations described above.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example, but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments and/or claims of the present disclosure may be combined in various ways, even if such combinations are not expressly recited in the present disclosure. In particular, such combinations and variations may be made without departing from the spirit and teaching of the present disclosure, and all of them fall within the scope of the present disclosure.
While the disclosure has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims and their equivalents. Accordingly, the scope of the present disclosure should not be limited to the above-described embodiments, but should be defined not only by the appended claims, but also by equivalents thereof.

Claims (10)

1. A method of processing, comprising:
adjusting the environment parameters of a first environment where the object to be identified is located according to a determined adjustment strategy at least based on the color information of the object to be identified, wherein the adjustment strategy at least makes the color of the first environment where the object to be identified is located different from that of the object to be identified; and
obtaining a first image of the object to be recognized so as to obtain the identification information of the object to be recognized based on at least recognition processing of the first image.
2. The method of claim 1, wherein the adjusting, according to the determined adjustment strategy, the environmental parameter of the first environment in which the object to be identified is located based on at least color information of the object to be identified comprises:
determining the color type and the distribution condition of the object to be recognized based on the color information of the object to be recognized;
determining a first adjustment strategy matched with the color type and the distribution situation so as to compensate a first color for a region to be identified where the object to be identified is located based on the first adjustment strategy;
wherein the identification degree between the first color and the color type and distribution condition of the object to be identified meets a first threshold value.
3. The method of claim 1, wherein the adjusting, according to the determined adjustment strategy, the environmental parameter of the first environment in which the object to be identified is located based on at least color information of the object to be identified comprises:
acquiring attitude information of an object to be recognized in a region to be recognized where the object to be recognized is located;
determining a second adjustment strategy for adjusting the environmental parameters based on the posture information and the color type and distribution condition of the object to be recognized, so as to determine compensation parameters of the area to be recognized based on the second adjustment strategy;
compensating a second color and brightness matched with the second color to an object to be recognized in the area to be recognized based on at least the compensation parameter;
wherein the second color is different from a color type of the object to be recognized.
4. The method of claim 3, wherein compensating the object to be recognized in the area to be recognized for the brightness matching the second color based at least on the compensation parameter comprises:
determining, based on the attitude information, a brightness-adjustable light source of the same color as the second color;
and controlling the brightness-adjustable light source to compensate the light with the brightness matched with the second color to the object to be identified based on the compensation parameter.
5. The method of claim 1, wherein the deriving identification information of the object to be recognized based on at least a recognition process of the first image comprises:
determining a partition granularity of the first image based on at least attribute information of an object to be identified;
performing image partition on the first image according to the partition granularity to remove environmental noise to obtain at least one image partition of the object to be identified;
extracting vector features of the at least one image partition at least based on the attribute information, and marking the object to be recognized at least based on the vector features to obtain identification information of the object to be recognized;
the number of edge pixel points of the image partitions corresponding to different partition granularities is different.
6. The method of claim 5, wherein the image-partitioning the first image according to the partition granularity to remove ambient noise to obtain at least one image partition of the object to be identified comprises:
obtaining the image gradients of the first image at different pixel points;
determining pixel points of which the image gradient is greater than a first threshold value as edge pixel points of the first image;
determining partition edge pixel points from the determined edge pixel points according to the partition granularity so as to form at least one image partition based on the partition edge pixel points;
and removing the image partition corresponding to the first environment from the at least one image partition based on the color information of the object to be recognized to obtain the at least one image partition of the object to be recognized.
7. The method of claim 1, further comprising:
detecting that an object to be recognized is located in a region to be recognized of the first environment, and obtaining a second image of the object to be recognized so as to determine color information of the object to be recognized based on the second image.
8. The method of any of claims 1 to 7, further comprising:
comparing the first image of the object to be recognized with the samples in a sample database to output prompt information for updating the sample database or identification information of the object to be recognized.
9. A processing apparatus, comprising:
the environment parameter adjusting module is used for adjusting the environment parameters of a first environment where the object to be identified is located according to a determined adjusting strategy at least based on the color information of the object to be identified, wherein the adjusting strategy at least makes the color of the first environment where the object to be identified is located different from that of the object to be identified; and
a first image obtaining module, configured to obtain a first image of the object to be recognized, so as to obtain identification information of the object to be recognized based on at least recognition processing on the first image.
10. An electronic device, comprising:
the image acquisition assembly is used for acquiring images;
a light source assembly for providing light sources of a plurality of colors;
one or more processors; and
a computer readable storage medium storing one or more computer programs which, when executed by the processor, implement the method of any of claims 1-7.
CN201911415251.5A 2019-12-31 2019-12-31 Processing method, processing device and electronic equipment Pending CN111145194A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911415251.5A CN111145194A (en) 2019-12-31 2019-12-31 Processing method, processing device and electronic equipment


Publications (1)

Publication Number Publication Date
CN111145194A (en) 2020-05-12

Family

ID=70522743

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911415251.5A Pending CN111145194A (en) 2019-12-31 2019-12-31 Processing method, processing device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111145194A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105551014A (en) * 2015-11-27 2016-05-04 江南大学 Image sequence change detection method based on belief propagation algorithm with time-space joint information
US20170223326A1 (en) * 2016-01-28 2017-08-03 International Business Machines Corporation Automated color adjustment of media files
CN107609514A (en) * 2017-09-12 2018-01-19 广东欧珀移动通信有限公司 Face identification method and Related product
CN107729099A (en) * 2017-09-25 2018-02-23 联想(北京)有限公司 Background method of adjustment and its system
CN108038889A (en) * 2017-11-10 2018-05-15 维沃移动通信有限公司 The processing method and mobile terminal of a kind of image color cast
CN110490852A (en) * 2019-08-13 2019-11-22 腾讯科技(深圳)有限公司 Search method, device, computer-readable medium and the electronic equipment of target object


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111640179A (en) * 2020-06-26 2020-09-08 百度在线网络技术(北京)有限公司 Display method, device and equipment of pet model and storage medium
CN111640179B (en) * 2020-06-26 2023-09-01 百度在线网络技术(北京)有限公司 Display method, device, equipment and storage medium of pet model


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination