CN113705544B - Automobile interior cleaning method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113705544B
Authority
CN
China
Prior art keywords
region, image, halo, area, cleaning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111259091.7A
Other languages
Chinese (zh)
Other versions
CN113705544A
Inventor
钟泽邦
张校志
范朝龙
刘家骏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202111259091.7A
Publication of CN113705544A
Application granted
Publication of CN113705544B
Legal status: Active

Classifications

    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B60 — VEHICLES IN GENERAL
    • B60S — SERVICING, CLEANING, REPAIRING, SUPPORTING, LIFTING, OR MANOEUVRING OF VEHICLES, NOT OTHERWISE PROVIDED FOR
    • B60S3/00 — Vehicle cleaning apparatus not integral with vehicles
    • B60S3/008 — Vehicle cleaning apparatus not integral with vehicles for interiors of land vehicles

Abstract

The application belongs to the technical field of cleaning and discloses an automobile interior cleaning method and device, an electronic device, and a storage medium. Category information of each area is acquired from an automobile interior image; candidate material information is then determined according to the category information of each area; the material type of each area is determined from the corresponding candidate materials according to a white-light reflection image of that area; and a cleaning operation in the corresponding cleaning mode is then performed on each area according to its material type. Compared with traditional spectrum-based material identification, the method requires neither a spectrometer nor a monochromatic light source when identifying the material, saving cost. Moreover, because the candidate materials of each region are determined first and the reflected-light measurements are then analyzed against them, dependence on the reflected-light properties is reduced and robustness is improved. The material of each part of the automobile interior can thus be accurately identified and the corresponding cleaning mode determined, giving a high degree of automation.

Description

Automobile interior cleaning method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of cleaning, in particular to an automobile interior cleaning method and device, electronic equipment and a storage medium.
Background
Existing automatic cleaning methods for automobiles generally target the exterior trim. The exterior mainly consists of materials such as metal, plastic, and glass, which can all be cleaned in the same way, so automatic cleaning is easy to realize. The automobile interior, however, includes materials such as leather, fabric, and plastic, which are difficult to clean by a single means: for example, leather and plastic can be cleaned by wiping while fabric cannot, and plastic can be wiped with water while leather must be wiped with a special cleaning solution.
The interior of an automobile can be cleaned automatically by a mechanical arm, but to automate the cleaning, the mechanical arm must effectively identify the material of each part of the interior so that the corresponding cleaning mode can be executed for each material. Traditional material identification mainly determines the material from its surface texture and spectrum. However, the automotive interior environment is complex, and material surfaces often receive special treatment (such as seat weaving, suede finishing, or leather perforation), so texture-based identification is unreliable; moreover, surfaces of different materials can produce the same spectrum. Traditional methods therefore struggle to identify automotive interior materials effectively, and when the interior is cleaned automatically by a mechanical arm, the material of each part must be confirmed manually, resulting in a low degree of automation.
Disclosure of Invention
The application aims to provide an automobile interior cleaning method and device, an electronic device, and a storage medium that can accurately identify the material of each part of an automobile interior and thereby determine the corresponding cleaning mode with a high degree of automation.
In a first aspect, the present application provides an automobile interior cleaning method, which is applied to a mechanical arm for cleaning the interior of an automobile; the mechanical arm comprises a camera and a white light irradiation lamp;
the automobile interior cleaning method comprises the following steps:
A1. acquiring an automobile interior image captured by the camera;
A2. segmenting the automobile interior image by using a pre-trained deep learning network to acquire the category information of each region;
A3. acquiring candidate material information of each region according to the category information of each region;
A4. illuminating each region with the white light irradiation lamp and acquiring a reflection image of the white-light-illuminated area of each region captured by the camera;
A5. determining the material type of each region from the corresponding candidate materials according to the reflection image and the candidate material information of each region;
A6. determining a cleaning mode for each region according to the material type of each region;
A7. performing the corresponding cleaning operation on each region according to its cleaning mode.
In this automobile interior cleaning method, candidate material information is determined according to the category information of each region, the material type of each region is determined from the corresponding candidate materials according to the white-light reflection image of that region, and a cleaning operation in the corresponding cleaning mode is then performed on each region according to its material type. Compared with traditional spectrum-based material identification, the method requires neither a spectrometer nor a monochromatic light source when identifying the material, saving cost. Moreover, because the candidate materials of each region are determined first and the reflected-light measurements are then analyzed against them, dependence on the reflected-light properties is reduced and robustness is improved. The material of each part of the automobile interior can thus be accurately identified and the corresponding cleaning mode determined, giving a high degree of automation.
Preferably, after step A1 and before step A4, the method further comprises the step of:
A8. sending a light-off command to turn off the ambient lighting.
This reduces interference from ambient light and improves the accuracy of material identification, thereby ensuring that the cleaning mode of each region is correct and avoiding damage to the interior from an incorrect cleaning mode.
Preferably, step A5 includes:
A501. acquiring the halo size, halo brightness, and image brightness of the reflection image of each region;
A502. determining the material type of each region from the corresponding candidate materials according to the candidate material information of each region and the corresponding halo size, halo brightness, and image brightness.
The halo size, halo brightness, and image brightness of the reflection image together reflect the surface roughness of the material; by selecting, from the candidate materials, the material whose halo size, halo brightness, and image brightness are closest to those of the reflection image, an accurate identification result is obtained.
Preferably, step A4 includes:
illuminating a target region with the white light irradiation lamp and acquiring a first reflection image of the white-light-illuminated area of the target region captured by the camera;
translating the white light irradiation lamp by a first distance to illuminate the target region and acquiring a second reflection image of the white-light-illuminated area of the target region captured by the camera;
and step A501 includes:
acquiring a first pixel diameter and a first pixel center coordinate of the minimum circumcircle of the halo region in the first reflection image, wherein the halo region is the region formed by all pixels with an RGB value of (255, 255, 255), and the minimum circumcircle is the smallest-diameter circle that encloses the halo region;
acquiring a second pixel center coordinate of the minimum circumcircle of the halo region in the second reflection image;
the pixel distance is calculated according to the following formula:

$d = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$

wherein $d$ is the pixel distance; $x_1$ and $y_1$ are the two pixel coordinate values of the first pixel center coordinate; and $x_2$ and $y_2$ are the two pixel coordinate values of the second pixel center coordinate;
the halo size is calculated according to the following formula:

$S = \dfrac{L \cdot D}{d}$

wherein $S$ is the halo size, $L$ is the first distance, and $D$ is the first pixel diameter.
Preferably, step A501 comprises:
selecting, on a first circle, at least four first reference points evenly distributed along the first circle, and calculating a first RGB average of all the first reference points, wherein the first circle is the circle whose center is the point corresponding to the first pixel center coordinate and whose diameter is the first pixel diameter;
selecting, on a second circle, at least four second reference points evenly distributed along the second circle, and calculating a second RGB average of all the second reference points, wherein the second circle is the circle whose center is the point corresponding to the first pixel center coordinate and whose diameter is twice the first pixel diameter;
the halo brightness is calculated according to the following formula:

$B = \bar{v}_1 - \bar{v}_2$

wherein $B$ is the halo brightness, $\bar{v}_1$ is the first RGB average, and $\bar{v}_2$ is the second RGB average.
Preferably, step A501 comprises:
calculating the average RGB value of the pixels of the target region in the first reflection image as the image brightness.
Preferably, step A502 comprises:
inputting the candidate material information of each region and the corresponding halo size, halo brightness, and image brightness into a pre-trained recognition model to obtain the material type recognition result of each region.
In a second aspect, the present application provides an automobile interior cleaning device, which is applied to a mechanical arm for cleaning an automobile interior; the mechanical arm comprises a camera and a white light irradiation lamp;
the automotive interior cleaning device includes:
a first acquisition module for acquiring the automobile interior image captured by the camera;
a first identification module for segmenting the automobile interior image using a pre-trained deep learning network and acquiring the category information of each region;
a second acquisition module for acquiring candidate material information of each region according to the category information of each region;
a third acquisition module for illuminating each region with the white light irradiation lamp and acquiring a reflection image of the white-light-illuminated area of each region captured by the camera;
a second identification module for determining the material type of each region from the corresponding candidate materials according to the reflection image and the candidate material information of each region;
a first execution module for determining the cleaning mode of each region according to the material type of each region;
and a second execution module for performing the corresponding cleaning operation on each region according to its cleaning mode.
The automobile interior cleaning device determines candidate material information according to the category information of each region, determines the material type of each region from the corresponding candidate materials according to the white-light reflection image of that region, and then performs a cleaning operation in the corresponding cleaning mode on each region according to its material type. Compared with traditional spectrum-based material identification, the device requires neither a spectrometer nor a monochromatic light source when identifying the material, saving cost. Moreover, because the candidate materials of each region are determined first and the reflected-light measurements are then analyzed against them, dependence on the reflected-light properties is reduced and robustness is improved. The material of each part of the automobile interior can thus be accurately identified and the corresponding cleaning mode determined, giving a high degree of automation.
In a third aspect, the present application provides an electronic device comprising a processor and a memory, wherein the memory stores a computer program executable by the processor, and the processor, when executing the computer program, performs the steps of the automobile interior cleaning method described above.
In a fourth aspect, the present application provides a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, performing the steps of the automobile interior cleaning method described above.
Advantageous effects:
According to the automobile interior cleaning method and device, electronic device, and storage medium provided herein, candidate material information is determined according to the category information of each region, the material type of each region is determined from the corresponding candidate materials according to the white-light reflection image of that region, and a cleaning operation in the corresponding cleaning mode is then performed on each region according to its material type. Compared with traditional spectrum-based material identification, no spectrometer or monochromatic light source is needed when identifying the material, saving cost. Moreover, because the candidate materials of each region are determined first and the reflected-light measurements are then analyzed against them, dependence on the reflected-light properties is reduced and robustness is improved. The material of each part of the automobile interior can thus be accurately identified and the corresponding cleaning mode determined, giving a high degree of automation.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application.
Drawings
Fig. 1 is a flowchart of a method for cleaning an interior of an automobile according to an embodiment of the present disclosure.
Fig. 2 is a schematic structural diagram of an automotive interior cleaning device according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Referring to fig. 1, fig. 1 illustrates an automobile interior cleaning method according to some embodiments of the present application, which is applied to a mechanical arm for cleaning the interior of an automobile; the mechanical arm comprises a camera and a white light irradiation lamp;
the automobile interior cleaning method comprises the following steps:
A1. acquiring an automobile interior image captured by the camera;
A2. segmenting the automobile interior image by using a pre-trained deep learning network to acquire the category information of each region;
A3. acquiring candidate material information of each region according to the category information of each region;
A4. illuminating each region with the white light irradiation lamp and acquiring a reflection image of the white-light-illuminated area of each region captured by the camera;
A5. determining the material type of each region from the corresponding candidate materials according to the reflection image and the candidate material information of each region;
A6. determining a cleaning mode for each region according to the material type of each region;
A7. performing the corresponding cleaning operation on each region according to its cleaning mode.
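Steps A1 to A7 above can be sketched as a minimal, self-contained control loop. All names and data below (the candidate table, the reference halo sizes, the cleaning modes, and the nearest-reference classifier) are illustrative assumptions rather than the patent's actual implementation; a real system would obtain the region categories from the segmentation network of step A2 and the reflection features from the measurements described later.

```python
CANDIDATES = {                    # A3: category -> common candidate materials (assumed)
    "seat": ["leather", "fabric"],
    "console": ["plastic"],
}
CLEANING_MODE = {                 # A6: material -> cleaning mode (assumed)
    "leather": "wipe with special cleaning solution",
    "fabric": "vacuum",
    "plastic": "wipe with water",
}
REFERENCE_HALO_SIZE = {"leather": 1.0, "fabric": 4.0, "plastic": 0.5}  # assumed values

def classify(features, candidates):
    # A5 (stand-in): choose the candidate whose reference halo size is
    # closest to the measured one; the real method also compares halo
    # brightness and image brightness.
    return min(candidates,
               key=lambda m: abs(REFERENCE_HALO_SIZE[m] - features["halo_size"]))

def plan_cleaning(regions):
    # regions: name -> (category from A2, reflection features from A4)
    plan = {}
    for name, (category, features) in regions.items():
        material = classify(features, CANDIDATES[category])   # A3 + A5
        plan[name] = (material, CLEANING_MODE[material])      # A6
    return plan                                               # A7 executes this plan

regions = {"driver_seat": ("seat", {"halo_size": 3.2}),
           "dashboard": ("console", {"halo_size": 0.6})}
print(plan_cleaning(regions))
```

Keeping identification restricted to the per-category candidate list is what lets a coarse reflection feature suffice: the classifier never has to distinguish, say, fabric from plastic, only the materials plausible for that region.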
In this automobile interior cleaning method, candidate material information is determined according to the category information of each region, the material type of each region is determined from the corresponding candidate materials according to the white-light reflection image of that region, and a cleaning operation in the corresponding cleaning mode is then performed on each region according to its material type. Compared with traditional spectrum-based material identification, the method requires neither a spectrometer nor a monochromatic light source when identifying the material, saving cost. Moreover, because the candidate materials of each region are determined first and the reflected-light measurements are then analyzed against them, dependence on the reflected-light properties is reduced and robustness is improved. The material of each part of the automobile interior can thus be accurately identified and the corresponding cleaning mode determined, giving a high degree of automation.
When acquiring the automobile interior image, the image can be captured by the camera under natural light or under indoor lighting, with the mechanical arm driving the camera to scan the automobile interior so as to obtain images of each part of the interior.
In some preferred embodiments, after step A1 and before step A4, the method further comprises the step of:
A8. sending a light-off command to turn off the ambient lighting.
In this embodiment, the automobile enters a dedicated clean room for cleaning. The automobile interior image captured by the camera is taken under the lighting of the clean room's lamps, and the control system of the mechanical arm can be communicatively connected to the lighting system of the clean room. When a reflection image needs to be captured, the control system of the mechanical arm sends a light-off command to the lighting system so that the ambient lighting is turned off. This reduces interference from ambient light and improves the accuracy of material identification, thereby ensuring that the cleaning mode of each region is correct and avoiding damage to the interior from an incorrect cleaning mode.
In practical applications, if the clean room has a light-permeable door or window, an automatically opening and closing shading device (for example, an electric shutter) may be provided, with the control system of the mechanical arm communicatively connected to the control system of the shading device; in that case, after step A1 and before step A4, the method further comprises the step of:
A9. sending a shading command to cause the shading device to shade the light-permeable door and/or window.
This further reduces the brightness in the clean room, avoids interference from ambient light, and improves the accuracy of material identification.
Specifically, in step A2, segmenting an image containing different components with a pre-trained deep learning network and identifying the category of each segmented region is a conventional technique, and its steps are not described in detail here. The categories of the regions include, for example, the center console, the seats, and the floor.
Each category of region is made of certain common materials; for a seat, for example, the material is generally leather or fabric. A candidate material look-up table is generated in advance from the common materials of each category of region, recording those common materials as the corresponding candidate materials. In step A3, the candidate material look-up table is queried with the category information of each region to obtain the candidate material information of that region. This step narrows the comparison range of material identification for each region and improves identification efficiency.
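The candidate material look-up table of step A3 amounts to a simple mapping from region category to a short material list. A minimal sketch follows; the categories and material lists are illustrative assumptions, not the patent's actual table.

```python
# Assumed candidate-material look-up table: category -> common materials.
CANDIDATE_TABLE = {
    "seat": ["leather", "fabric"],
    "center_console": ["plastic", "leather"],
    "floor": ["fabric", "rubber"],
}

def candidate_materials(category):
    # Narrowing recognition to this short list is what reduces the
    # comparison range and improves identification efficiency.
    return CANDIDATE_TABLE.get(category, [])

print(candidate_materials("seat"))
```

An unknown category yields an empty list, which a real system would treat as "fall back to manual confirmation" rather than guessing a material.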
In some preferred embodiments, the white light irradiation lamp is fixed to the camera. When a reflection image is captured, the white light irradiation lamp irradiates the target region perpendicularly, so that the camera photographs the reflection image of the target region perpendicularly; the capture angle of every reflection image is thus uniform, ensuring the accuracy of the identification result. To ensure that the white light irradiation lamp irradiates the target region perpendicularly, step A4 includes:
A401. swinging the white light irradiation lamp about a first direction axis to the angle with the highest reflected light intensity, based on a hill-climbing algorithm; the angle with the highest reflected light intensity is the angle that maximizes the maximum brightness value in a first image captured by the camera, where the first image is the image captured by the camera in real time while the white light irradiation lamp swings about the first direction axis, and the maximum brightness of the first image is the brightness value of its brightest pixel; the first direction axis is a direction axis perpendicular to the optical axis of the white light irradiation lamp;
A402. swinging the white light irradiation lamp about a second direction axis to the angle with the highest reflected light intensity, based on the hill-climbing algorithm; the angle with the highest reflected light intensity is the angle that maximizes the maximum brightness value in a second image captured by the camera, where the second image is the image captured by the camera in real time while the white light irradiation lamp swings about the second direction axis, and the maximum brightness of the second image is the brightness value of its brightest pixel; the second direction axis is another direction axis perpendicular to the optical axis of the white light irradiation lamp.
Through these steps, the brightness of the image captured by the camera is maximized, and the optical axis of the camera is substantially perpendicular to the target region. Swinging the white light irradiation lamp to the angle with the highest reflected light intensity based on the hill-climbing algorithm comprises the following steps:
S1. swinging the white light irradiation lamp step by step (about the first direction axis or the second direction axis) with a preset step length (an angular step), capturing an image (the first image or the second image) with the camera after each step, and extracting the maximum brightness value of that image;
S2. taking the swing angles of the images immediately after and immediately before the image with the highest maximum brightness value in step S1 as the starting angle and the end angle respectively, taking half of the preset step length as the effective step length, swinging the white light irradiation lamp step by step in the reverse direction from the starting angle, capturing an image with the camera after each step and extracting its maximum brightness value, until the swing angle reaches or passes the end angle;
S3. taking the swing angle corresponding to the image with the highest maximum brightness value in step S2 as the target angle, and swinging the white light irradiation lamp to the target angle.
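Steps S1 to S3 can be sketched as a one-axis coarse-then-fine hill-climbing search. The function `brightness_at` is a hypothetical stand-in for "maximum pixel brightness of the camera image at this lamp angle"; here it is a synthetic peaked function so the sketch runs on its own.

```python
def brightness_at(angle):
    # Stand-in for the camera measurement: a synthetic brightness curve
    # peaking at 17.3 degrees (assumed, for illustration only).
    return 255 - abs(angle - 17.3)

def find_peak_angle(lo, hi, coarse_step):
    # S1: coarse sweep over the angular range with the preset step length.
    angles = []
    a = lo
    while a <= hi:
        angles.append(a)
        a += coarse_step
    best = max(angles, key=brightness_at)
    # S2: refine around the best coarse angle with half the step length,
    # sweeping in reverse from the next coarse angle to the previous one.
    start, end = best + coarse_step, best - coarse_step
    fine = []
    a = start
    while a >= end:
        fine.append(a)
        a -= coarse_step / 2
    # S3: the fine angle with the highest brightness is the target angle.
    return max(fine, key=brightness_at)

print(find_peak_angle(0, 40, 5.0))
```

With a coarse step of 5° the sweep lands on 15°, and the half-step refinement between 10° and 20° then finds 17.5°, within half a fine step of the true peak, mirroring how the two-pass search trades a fine full-range sweep for a coarse sweep plus a local one.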
Furthermore, to further unify the white-light irradiation conditions of the regions and improve the accuracy of the identification result, the distance between the camera and the target region can be unified to a preset distance when the reflection image is captured; thus, after step A402, the method further comprises the step of:
A403. adjusting the position of the camera along its optical axis so that the distance between the camera and the target region equals the preset distance.
The camera may be a 3D camera, so that its distance from the target region can be measured directly; if the camera has no ranging function, a ranging sensor (such as a laser ranging sensor) may be mounted on the camera to measure the distance to the target region. Because the camera is already perpendicular to the target region, its position need only be adjusted along its optical axis, which keeps the camera perpendicular to the target region at all times.
Preferably, step A5 includes:
A501. acquiring the halo size, halo brightness, and image brightness of the reflection image of each region;
A502. determining the material type of each region from the corresponding candidate materials according to the candidate material information of each region and the corresponding halo size, halo brightness, and image brightness.
The halo size, halo brightness, and image brightness of the reflection image together reflect the surface roughness of the material; by selecting, from the candidate materials, the material whose halo size, halo brightness, and image brightness are closest to those of the reflection image, an accurate identification result is obtained.
Preferably, step A4 includes:
illuminating a target region with the white light irradiation lamp and acquiring a first reflection image of the white-light-illuminated area of the target region captured by the camera;
translating the white light irradiation lamp by a first distance to illuminate the target region and acquiring a second reflection image of the white-light-illuminated area of the target region captured by the camera;
and step A501 includes:
acquiring a first pixel diameter and a first pixel center coordinate of the minimum circumcircle of the halo region in the first reflection image, wherein the halo region is the region formed by all pixels with an RGB value of (255, 255, 255), and the minimum circumcircle is the smallest-diameter circle that encloses the halo region;
acquiring a second pixel center coordinate of the minimum circumcircle of the halo region in the second reflection image;
the pixel distance is calculated according to the following formula:

$d = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$

wherein $d$ is the pixel distance; $x_1$ and $y_1$ are the two pixel coordinate values of the first pixel center coordinate; and $x_2$ and $y_2$ are the two pixel coordinate values of the second pixel center coordinate;
the halo size is calculated according to the following formula:

$S = \dfrac{L \cdot D}{d}$

wherein $S$ is the halo size, $L$ is the first distance, and $D$ is the first pixel diameter.
In step a4, the first reflection image and the second reflection image are sequentially obtained with each region as the target region, so that the halo size is calculated from each pair of first and second reflection images, and the halo size of each region can be obtained. The first distance L can be preset according to actual needs.
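As a minimal sketch, the halo-size computation above can be expressed in Python. The relation S = (L / d) × D1 (the lamp translation divided by the halo's pixel displacement gives the physical size of one pixel, which then scales the halo's pixel diameter) is a reconstruction from the variables defined here, and the function names are hypothetical:

```python
import math

def pixel_distance(first_center, second_center):
    """Euclidean distance in pixels between the two halo center coordinates."""
    (x1, y1), (x2, y2) = first_center, second_center
    return math.hypot(x1 - x2, y1 - y2)

def halo_size(first_diameter_px, first_center, second_center, first_distance):
    """Physical halo size: first_distance / pixel_distance estimates the
    physical size of one pixel, which scales the halo's pixel diameter."""
    d = pixel_distance(first_center, second_center)
    return first_distance / d * first_diameter_px
```

For example, if translating the lamp by 25 mm shifts the halo center by 50 pixels, one pixel corresponds to 0.5 mm, so a 40-pixel halo diameter gives a 20 mm halo size.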
Further, step a501 includes:
selecting at least four first reference points which are uniformly distributed along the first circumference on the first circumference, and calculating a first RGB average of all the first reference points; the first circumference is a circle taking the point corresponding to the first pixel center coordinate as the center and the first pixel diameter as the diameter;
selecting at least four second reference points which are uniformly distributed along the second circumference on the second circumference, and calculating a second RGB average of all the second reference points; the second circumference is a circle taking the point corresponding to the first pixel center coordinate as the center and twice the first pixel diameter as the diameter;
halo brightness is calculated according to the following formula:

B = V1 - V2

wherein B is the halo brightness, V1 is the first RGB average, and V2 is the second RGB average.
For example, four points on the first circumference directly above, directly below, directly to the left, and directly to the right of the center of the circle may be selected as the first reference points, and four points on the second circumference directly above, directly below, directly to the left, and directly to the right of the center of the circle may be selected as the second reference points. But the number and distribution positions of the first reference points and the second reference points are not limited thereto.
Wherein the first RGB average may be calculated by the following formula:

V1 = (1 / (3n)) × Σ(Ri + Gi + Bi), summed over i = 1 to n

wherein Ri, Gi and Bi are the R, G and B channel values of the i-th first reference point, respectively, and n is the total number of first reference points.
Wherein the second RGB average may be calculated by the following formula:

V2 = (1 / (3m)) × Σ(Ri + Gi + Bi), summed over i = 1 to m

wherein Ri, Gi and Bi are the R, G and B channel values of the i-th second reference point, respectively, and m is the total number of second reference points.
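The reference-point sampling and halo-brightness computation can be sketched as follows. The definition of halo brightness as the difference between the two averages is an assumption reconstructed from the surrounding text, and the image representation (a mapping from integer (x, y) coordinates to an (R, G, B) tuple) is hypothetical:

```python
import math

def circle_points(center, diameter, n=4):
    """n reference points evenly spaced on a circle; for n=4 these are the
    points directly right, above, left and below the center."""
    cx, cy = center
    r = diameter / 2.0
    return [(cx + r * math.cos(2 * math.pi * i / n),
             cy + r * math.sin(2 * math.pi * i / n)) for i in range(n)]

def rgb_average(image, points):
    """Mean of the R, G and B channel values over the sampled points."""
    total = sum(sum(image[(round(x), round(y))]) for x, y in points)
    return total / (3 * len(points))

def halo_brightness(image, first_center, first_diameter):
    # First circle: the halo's circumcircle; second circle: twice its diameter.
    v1 = rgb_average(image, circle_points(first_center, first_diameter))
    v2 = rgb_average(image, circle_points(first_center, 2 * first_diameter))
    return v1 - v2  # assumed: brightness drop from halo edge to surroundings
```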
Further, step a501 includes:
and calculating the average RGB value of the pixel points of the target area in the first reflection image as the image brightness.
The average RGB value of all pixel points in the target region in the first reflection image may be calculated as the image brightness. Alternatively, a preset number (such as, but not limited to, 100) of pixel points in the target region in the first reflection image may be randomly selected and their average RGB value calculated as the image brightness. The average RGB value may be calculated in the same way as the first RGB average above.
In some embodiments, step a502 comprises:
and inputting the candidate material information of each region and the corresponding halo size, halo brightness and image brightness into a pre-trained recognition model to obtain a material type recognition result of each region.
A recognition model can be trained in advance on a set of labeled pictures using a decision tree; the information to be recognized this time (candidate material information and the corresponding halo size, halo brightness and image brightness) is then input into the model, which outputs the material type recognition result for each region.
In other embodiments, step a502 comprises:
sequentially executing the following steps by taking each area as a target area:
acquiring standard halo size, standard halo brightness and standard image brightness of each candidate material of a target area;
calculating the deviations (as absolute values) between the halo size, halo brightness and image brightness of the target region and the standard halo size, standard halo brightness and standard image brightness of each candidate material;
calculating the matching value of each candidate material according to the following formula:

Pi = 1 / (a × ΔSi + b × ΔBi + c × ΔIi)

wherein Pi is the matching value of the i-th candidate material, ΔSi is the deviation between the halo size of the target region and the standard halo size of the i-th candidate material, ΔBi is the deviation between the halo brightness of the target region and the standard halo brightness of the i-th candidate material, ΔIi is the deviation between the image brightness of the target region and the standard image brightness of the i-th candidate material, and a, b and c are respectively preset weight values;
and taking the material type of the candidate material with the maximum matching value as the material type of the target area.
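The matching-and-selection procedure can be sketched in Python. The reciprocal-of-weighted-deviations form of the matching value is a reconstruction (smaller deviations must yield a larger match so that the maximum matching value wins), and the small epsilon guarding against division by zero is an added assumption:

```python
def matching_value(dev_size, dev_brightness, dev_image, a=1.0, b=1.0, c=1.0):
    """Assumed form: the smaller the weighted deviation sum, the larger the match."""
    return 1.0 / (a * dev_size + b * dev_brightness + c * dev_image + 1e-9)

def identify_material(target, candidates, a=1.0, b=1.0, c=1.0):
    """target: (halo_size, halo_brightness, image_brightness) of the region;
    candidates: {material: (std_size, std_brightness, std_image)}.
    Returns the material type with the maximum matching value."""
    best, best_p = None, float("-inf")
    for material, std in candidates.items():
        devs = [abs(t - s) for t, s in zip(target, std)]  # absolute deviations
        p = matching_value(*devs, a=a, b=b, c=c)
        if p > best_p:
            best, best_p = material, p
    return best
```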
The cleaning mode corresponding to each material type can be set according to actual needs, for example, for leather, the cleaning mode is a leather cleaning liquid wiping mode (the corresponding cleaning operation is to wipe a leather area by using a special leather cleaning liquid); for plastics, the cleaning mode is a common wiping mode (the corresponding cleaning operation is to wipe the plastic area by clean water); for the fabric, the cleaning mode is a dust suction mode (corresponding to the cleaning operation that the dust suction device is used for performing dust suction treatment on the fabric area); but the cleaning mode is not limited thereto. A cleaning mode lookup table may be generated in advance, and a corresponding cleaning operation program script may be generated for each cleaning mode, where the cleaning mode lookup table records a mapping relationship between each material type and each cleaning mode, and in step a6, the cleaning mode lookup table is queried according to the material type of each region to obtain the cleaning mode of each region. In step a7, a cleaning operation program script corresponding to the cleaning mode of each region is called, so that the robot arm performs a corresponding cleaning operation.
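The cleaning mode lookup table and per-mode operation scripts described above can be sketched as plain mappings; the table contents and script behavior here are illustrative examples taken from the text, not a definitive implementation:

```python
# Cleaning mode lookup table: material type -> cleaning mode (examples from the text)
CLEANING_MODE_TABLE = {
    "leather": "leather_cleaning_liquid_wiping",
    "plastic": "plain_water_wiping",
    "fabric": "dust_suction",
}

# One cleaning operation "script" per cleaning mode (hypothetical stand-ins
# for the robot-arm program scripts)
CLEANING_SCRIPTS = {
    "leather_cleaning_liquid_wiping": lambda region: f"wipe {region} with leather cleaning liquid",
    "plain_water_wiping": lambda region: f"wipe {region} with clean water",
    "dust_suction": lambda region: f"vacuum {region}",
}

def clean_region(region_name, material_type):
    mode = CLEANING_MODE_TABLE[material_type]   # step a6: look up the cleaning mode
    return CLEANING_SCRIPTS[mode](region_name)  # step a7: invoke the operation script
```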
In view of the above, the method for cleaning the interior of the automobile acquires the image of the interior of the automobile collected by the camera; segmenting the automobile interior image by utilizing a pre-trained deep learning network to acquire the category information of each region; acquiring candidate material information of each region according to the category information of each region; illuminating each area with a white light illuminating lamp and acquiring a reflected image of the white light illuminated area of each area collected by a camera; determining the material type of each region from the corresponding candidate material according to the reflection image and the candidate material information of each region; determining a cleaning mode of each area according to the material type of each area; executing corresponding cleaning operation on each area according to the cleaning mode of each area; compared with the traditional material identification mode adopting spectra, the automobile interior cleaning method does not need a spectrometer and a monochromatic light source when identifying the material, so that the cost is saved; meanwhile, the candidate material information of each region is determined firstly, and then the reflected light result is comprehensively analyzed, so that the dependence on the attribute of the reflected light is reduced, and the robustness is better; the material of each part of the automobile interior trim can be accurately identified, so that the corresponding cleaning mode is determined, and the automation degree is high.
Referring to fig. 2, the present application provides an automotive interior cleaning device applied to a robot arm to clean an automotive interior; the mechanical arm comprises a camera and a white light irradiation lamp;
this automotive interior cleaning device includes:
a first acquisition module 1 for acquiring an automotive interior image acquired by a camera;
the first recognition module 2 is used for segmenting the automobile interior image by utilizing a pre-trained deep learning network and acquiring the category information of each region;
the second obtaining module 3 is configured to obtain candidate material information of each region according to the category information of each region;
a third acquisition module 4 for illuminating each area with a white light lamp and acquiring a reflected image of the white light illuminated area of each area collected by the camera;
the second identification module 5 is used for determining the material type of each region from the corresponding candidate material according to the reflection image and the candidate material information of each region;
the first execution module 6 is used for determining the cleaning mode of each area according to the material type of each area;
and the second execution module 7 is used for executing corresponding cleaning operation on each area according to the cleaning mode of each area.
The automobile interior image can be collected by the camera under natural light or in an indoor lighting environment; the mechanical arm drives the camera to scan the automobile interior so as to obtain an image of each part of the interior.
In some preferred embodiments, the automotive interior cleaning device further comprises:
and the light turning-off module is used for sending a light turning-off instruction to turn off the ambient lighting light when the reflected image needs to be collected.
In this embodiment, the automobile enters a dedicated clean room for cleaning, and the automobile interior image collected by the camera is shot under the clean room's lighting. The control system of the mechanical arm can be communicatively connected to the lighting system of the clean room; when the reflected image needs to be collected, the control system sends a light turning-off instruction to the lighting system to turn off the ambient lighting. This reduces interference from ambient light and improves the accuracy of material identification, which in turn ensures the correctness of the cleaning mode of each area and avoids damage to the interior from an incorrect cleaning mode.
In practical applications, if the clean room has a light-permeable door or window, an automatic opening/closing light-shielding device (e.g., an electric light-shielding roller shutter) may be disposed, and the control system of the mechanical arm is communicatively connected to the control system of the light-shielding device, so that the automotive interior cleaning device further includes:
and the shading module is used for sending a shading instruction when the reflected image needs to be collected, so that the shading device shades a transparent door and/or window.
Therefore, the brightness in the clean room is further reduced, the interference of ambient light is avoided, and the accuracy of material identification is improved.
Specifically, it is a conventional technique to segment an image including different components by using a deep learning network trained in advance and identify the type of each segmented region, and the steps thereof will not be described in detail here. The categories of the respective areas include, for example, a center console, a seat, a floor, and the like.
Wherein the different types of areas are made of corresponding common materials, for example, for a chair, the materials are generally leather or fabric. Generating a candidate material query table in advance according to the common materials of the regions of each category, and recording the common materials of the regions of each category in the candidate material query table as corresponding candidate materials; the second obtaining module 3 is configured to, when obtaining the candidate material information of each region according to the category information of each region, perform a query in the candidate material lookup table according to the category information of each region, so as to obtain the candidate material information of each region. Through the technical scheme, the material identification contrast range of each area can be reduced, so that the identification efficiency is improved.
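The candidate material lookup table described above can be sketched as a simple mapping from region category to common materials; the specific table contents here are illustrative assumptions (the seat example is from the text):

```python
# Candidate material lookup table: region category -> common (candidate) materials
CANDIDATE_MATERIAL_TABLE = {
    "seat": ["leather", "fabric"],          # from the text: seats are leather or fabric
    "center console": ["plastic", "leather"],  # hypothetical entry
    "floor": ["fabric", "plastic"],            # hypothetical entry
}

def candidate_materials(category):
    """Query the table by region category, narrowing the comparison range
    for material identification and improving identification efficiency."""
    return CANDIDATE_MATERIAL_TABLE.get(category, [])
```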
In some preferred embodiments, the white light irradiation lamp is fixedly arranged on the camera, and when the reflected image is collected, the white light irradiation lamp can vertically irradiate the target area, so that the camera vertically shoots the reflected image of the target area, the collection angle of each reflected image is uniform, and the accuracy of the identification result is ensured. Wherein, to ensure that the white light irradiation lamp vertically irradiates the target area, the third obtaining module 4 is configured to, when the white light irradiation lamp is used to irradiate each area and obtain the reflection image of the white light irradiation area of each area collected by the camera, perform:
swinging the white light irradiation lamp around a first direction axis, and swinging the white light irradiation lamp to an angle with the highest reflected light intensity based on a climbing algorithm; the angle with the highest reflected light intensity refers to an angle which enables the maximum brightness value in a first image collected by a camera to be the highest, wherein the first image is an image collected by the camera in real time when the white light irradiation lamp swings around a first direction axis, and the maximum brightness of the first image refers to the brightness value of a pixel point with the maximum brightness in the first image; wherein the first direction axis is a direction axis perpendicular to the optical axis of the white light irradiation lamp;
the white light irradiation lamp swings around a second direction axis, and swings to an angle with the highest reflected light intensity based on a climbing algorithm; the angle with the highest reflected light intensity refers to an angle which enables the maximum brightness value in a second image collected by the camera to be the highest, wherein the second image is an image collected by the camera in real time when the white light irradiation lamp swings around a second direction axis, and the maximum brightness of the second image refers to the brightness value of a pixel point with the maximum brightness in the second image; wherein the second direction axis is another direction axis perpendicular to the optical axis of the white light irradiation lamp.
Through this technical scheme, the brightness of the image acquired by the camera is maximized, at which point the optical axis of the camera is essentially perpendicular to the target area. Swinging the white light irradiation lamp to the angle with the highest reflected light intensity based on the climbing algorithm comprises the following steps:
s1, enabling a white light irradiation lamp to gradually swing (swing around a first direction axis or a second direction axis) by a preset step length (angle step length), acquiring an image (a first image or a second image) collected by a camera after each step of swing, and extracting the maximum brightness value of the image;
s2, taking the swing angle one step past and the swing angle one step before the image with the highest maximum brightness value in step S1 as the start angle and the end angle respectively, and half of the preset step length as the effective step length, swinging the white light irradiation lamp back stepwise from the start angle; after each step, acquiring the image collected by the camera and extracting its maximum brightness value, until the swing angle reaches or passes the end angle;
and S3, taking the swinging angle corresponding to the image with the highest maximum brightness value in the step S2 as a target angle, and swinging the white light irradiation lamp to the target angle.
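The coarse-to-fine climbing search in steps S1-S3 can be sketched as follows, with the camera's brightness reading simulated by a callable; the function name and the simulated brightness model are hypothetical:

```python
def climb_to_brightest_angle(max_brightness_at, start, stop, step):
    """Coarse-to-fine search (steps S1-S3): sweep [start, stop] at `step`,
    then re-sweep the two-step window around the coarse peak at half the
    step. `max_brightness_at(angle)` returns the maximum pixel brightness
    of the image captured at that swing angle."""
    # S1: coarse sweep at the preset step length
    angles = []
    a = start
    while a <= stop + 1e-9:
        angles.append(a)
        a += step
    peak = max(angles, key=max_brightness_at)
    # S2: fine reverse sweep from one step past the peak back to one step
    # before it, at half the preset step length
    fine_angles = []
    a = peak + step
    while a >= peak - step - 1e-9:
        fine_angles.append(a)
        a -= step / 2.0
    # S3: the fine angle with the highest maximum brightness is the target
    return max(fine_angles, key=max_brightness_at)
```

With a true peak at 13.5 degrees and a coarse step of 4 degrees, the coarse sweep locks onto 12 and the fine sweep (step 2) refines this to 14, the sampled angle nearest the peak.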
Furthermore, in order to further unify the white light irradiation conditions of the regions to improve the accuracy of the recognition result, the distances between the camera and the target region can be unified into a preset distance when the reflected image is collected; the third acquisition module 4 thus performs, when illuminating each area with the white light lamp and acquiring the reflected image of the white light illuminated area of each area acquired by the camera:
and adjusting the position of the camera along the optical axis direction of the camera to enable the distance between the camera and the target area to be a preset distance.
Wherein the camera may be a 3d camera, such that the distance of the camera from the target area may be measured; if the camera does not have the distance measuring function, a distance measuring sensor (such as a laser distance measuring sensor) may be disposed on the camera to measure the distance between the camera and the target area. The camera is perpendicular to the target area, so that the position of the camera can be adjusted only along the direction of the optical axis of the camera, and the camera is guaranteed to be perpendicular to the target area all the time.
Preferably, the second identifying module 5 is configured to, when determining the material type of each region from the corresponding candidate material according to the reflection image and the candidate material information of each region, perform:
acquiring the size of a halo, the brightness of the halo and the brightness of an image of a reflection image of each area;
and determining the material type of each region from the corresponding candidate material according to the candidate material information of each region and the corresponding halo size, halo brightness and image brightness.
The halo size, halo brightness and image brightness of the reflected image together reflect the roughness of the material surface; selecting from the candidate materials the material whose halo size, halo brightness and image brightness are closest to those of the reflected image therefore yields an accurate identification result.
Preferably, the third obtaining module 4 is configured to, when illuminating each area with a white light lamp and obtaining a reflection image of the white light illuminated area of each area captured by the camera, perform:
illuminating a target area by using a white light illuminating lamp, and acquiring a first reflection image of a white light illuminating area of the target area, which is acquired by a camera;
translating the white light irradiation lamp by a first distance to irradiate the target area, and acquiring a second reflection image of the white light irradiation area of the target area, which is acquired by the camera;
the second identifying module 5 is configured to, when obtaining the halo size of the reflection image of the target area, perform:
acquiring a first pixel diameter and a first pixel center coordinate of a minimum circumscribed circle of a halo region in the first reflection image; the halo region refers to the region surrounded by all pixels with RGB values of (255, 255, 255), and the minimum circumscribed circle is the circle of smallest diameter that encloses the halo region;
acquiring a second pixel center coordinate of a minimum circumcircle of a halo region in the second reflection image;
the pixel distance is calculated according to the following formula:

d = sqrt((x1 - x2)^2 + (y1 - y2)^2)

wherein d is the pixel distance, x1 and y1 are the two pixel coordinate values of the first pixel center coordinate, and x2 and y2 are the two pixel coordinate values of the second pixel center coordinate;

halo size is calculated according to the following formula:

S = (L / d) × D1

wherein S is the halo size, L is the first distance, and D1 is the first pixel diameter.
It should be noted that the third obtaining module 4 sequentially obtains the first reflection image and the second reflection image with each region as the target region, so that the second identifying module 5 calculates the halo size from each pair of first and second reflection images, and the halo size of each region can be obtained. The first distance L can be preset according to actual needs.
Further, the second identifying module 5 is configured to, when obtaining the halo brightness of the reflection image of the target area, perform:
selecting at least four first reference points which are uniformly distributed along the first circumference on the first circumference, and calculating a first RGB average of all the first reference points; the first circumference is a circle taking the point corresponding to the first pixel center coordinate as the center and the first pixel diameter as the diameter;
selecting at least four second reference points which are uniformly distributed along the second circumference on the second circumference, and calculating a second RGB average of all the second reference points; the second circumference is a circle taking the point corresponding to the first pixel center coordinate as the center and twice the first pixel diameter as the diameter;
halo brightness is calculated according to the following formula:

B = V1 - V2

wherein B is the halo brightness, V1 is the first RGB average, and V2 is the second RGB average.
For example, four points on the first circumference directly above, directly below, directly to the left, and directly to the right of the center of the circle may be selected as the first reference points, and four points on the second circumference directly above, directly below, directly to the left, and directly to the right of the center of the circle may be selected as the second reference points. But the number and distribution positions of the first reference points and the second reference points are not limited thereto.
Wherein the first RGB average may be calculated by the following formula:

V1 = (1 / (3n)) × Σ(Ri + Gi + Bi), summed over i = 1 to n

wherein Ri, Gi and Bi are the R, G and B channel values of the i-th first reference point, respectively, and n is the total number of first reference points.
Wherein the second RGB average may be calculated by the following formula:

V2 = (1 / (3m)) × Σ(Ri + Gi + Bi), summed over i = 1 to m

wherein Ri, Gi and Bi are the R, G and B channel values of the i-th second reference point, respectively, and m is the total number of second reference points.
Further, the second recognition module 5 is configured to, when acquiring the image brightness of the reflection image of the target area, perform:
and calculating the average RGB value of the pixel points of the target area in the first reflection image as the image brightness.
The average RGB value of all pixel points in the target region in the first reflection image may be calculated as the image brightness. Alternatively, a preset number (such as, but not limited to, 100) of pixel points in the target region in the first reflection image may be randomly selected and their average RGB value calculated as the image brightness. The average RGB value may be calculated in the same way as the first RGB average above.
In some embodiments, the second identifying module 5 is configured to, when determining the material type of each region from the corresponding candidate material according to the candidate material information of each region and the corresponding halo size, halo brightness and image brightness, perform:
and inputting the candidate material information of each region and the corresponding halo size, halo brightness and image brightness into a pre-trained recognition model to obtain a material type recognition result of each region.
In other embodiments, the second identifying module 5 is configured to, when determining the material type of each region from the corresponding candidate material according to the candidate material information of each region and the corresponding halo size, halo brightness and image brightness, perform:
sequentially executing the following steps by taking each area as a target area:
acquiring standard halo size, standard halo brightness and standard image brightness of each candidate material of a target area;
calculating the deviations (as absolute values) between the halo size, halo brightness and image brightness of the target region and the standard halo size, standard halo brightness and standard image brightness of each candidate material;
calculating the matching value of each candidate material according to the following formula:

Pi = 1 / (a × ΔSi + b × ΔBi + c × ΔIi)

wherein Pi is the matching value of the i-th candidate material, ΔSi is the deviation between the halo size of the target region and the standard halo size of the i-th candidate material, ΔBi is the deviation between the halo brightness of the target region and the standard halo brightness of the i-th candidate material, ΔIi is the deviation between the image brightness of the target region and the standard image brightness of the i-th candidate material, and a, b and c are respectively preset weight values;
and taking the material type of the candidate material with the maximum matching value as the material type of the target area.
The cleaning mode corresponding to each material type can be set according to actual needs, for example, for leather, the cleaning mode is a leather cleaning liquid wiping mode (the corresponding cleaning operation is to wipe a leather area by using a special leather cleaning liquid); for plastics, the cleaning mode is a common wiping mode (the corresponding cleaning operation is to wipe the plastic area by clean water); for the fabric, the cleaning mode is a dust suction mode (corresponding to the cleaning operation that the dust suction device is used for performing dust suction treatment on the fabric area); but the cleaning mode is not limited thereto. A cleaning mode lookup table may be generated in advance, and a corresponding cleaning operation program script may be generated for each cleaning mode, where the cleaning mode lookup table records a mapping relationship between each material type and each cleaning mode, so that the first execution module 6 is configured to execute, when determining the cleaning mode of each region according to the material type of each region: and inquiring in a cleaning mode inquiry table according to the material type of each area to obtain the cleaning mode of each area. The second executing module 7 is configured to, when performing the corresponding cleaning operation on each area according to the cleaning mode of each area, execute: and calling the corresponding cleaning operation program script according to the cleaning mode of each area, so that the mechanical arm executes the corresponding cleaning operation.
In view of the above, the automotive interior cleaning device acquires the automotive interior image acquired by the camera; segmenting the automobile interior image by utilizing a pre-trained deep learning network to acquire the category information of each region; acquiring candidate material information of each region according to the category information of each region; illuminating each area with a white light illuminating lamp and acquiring a reflected image of the white light illuminated area of each area collected by a camera; determining the material type of each region from the corresponding candidate material according to the reflection image and the candidate material information of each region; determining a cleaning mode of each area according to the material type of each area; executing corresponding cleaning operation on each area according to the cleaning mode of each area; compared with the traditional material identification mode adopting spectra, the automotive interior cleaning device does not need to use a spectrometer and a monochromatic light source when identifying the material, so that the cost is saved; meanwhile, the candidate material information of each region is determined firstly, and then the reflected light result is comprehensively analyzed, so that the dependence on the attribute of the reflected light is reduced, and the robustness is better; the material of each part of the automobile interior trim can be accurately identified, so that the corresponding cleaning mode is determined, and the automation degree is high.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure, the electronic device includes: the processor 301 and the memory 302, the processor 301 and the memory 302 being interconnected and communicating with each other via a communication bus 303 and/or other form of connection mechanism (not shown), the memory 302 storing a computer program executable by the processor 301, the processor 301 executing the computer program when the electronic device is running to perform the automotive interior cleaning method in any of the alternative implementations of the above embodiments to implement the following functions: acquiring an automotive interior image acquired by a camera; segmenting the automobile interior image by utilizing a pre-trained deep learning network to acquire the category information of each region; acquiring candidate material information of each region according to the category information of each region; illuminating each area with a white light illuminating lamp and acquiring a reflected image of the white light illuminated area of each area collected by a camera; determining the material type of each region from the corresponding candidate material according to the reflection image and the candidate material information of each region; determining a cleaning mode of each area according to the material type of each area; and performing corresponding cleaning operation on each area according to the cleaning mode of each area.
The embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the method for cleaning an interior trim of an automobile in any optional implementation manner of the foregoing embodiment is executed, so as to implement the following functions: acquiring an automotive interior image acquired by a camera; segmenting the automobile interior image by utilizing a pre-trained deep learning network to acquire the category information of each region; acquiring candidate material information of each region according to the category information of each region; illuminating each area with a white light illuminating lamp and acquiring a reflected image of the white light illuminated area of each area collected by a camera; determining the material type of each region from the corresponding candidate material according to the reflection image and the candidate material information of each region; determining a cleaning mode of each area according to the material type of each area; and performing corresponding cleaning operation on each area according to the cleaning mode of each area. The computer readable storage medium may be implemented by any type of volatile or nonvolatile storage device or combination thereof, such as a Static Random Access Memory (SRAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), an Erasable Programmable Read-Only Memory (EPROM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), a magnetic Memory, a flash Memory, a magnetic disk, or an optical disk.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division into units is merely a logical division, and other divisions are possible in actual implementations: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection of devices or units through communication interfaces, and may be electrical, mechanical, or in another form.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
In this document, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between those entities or actions.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (9)

1. An automobile interior cleaning method, applied to a mechanical arm to clean an automobile interior, characterized in that the mechanical arm comprises a camera and a white light illumination lamp;
the automobile interior cleaning method comprises the following steps:
A1. acquiring an automotive interior image acquired by the camera;
A2. segmenting the automobile interior image by utilizing a pre-trained deep learning network to acquire the category information of each region;
A3. acquiring candidate material information of each region according to the category information of each region;
A4. illuminating each of the regions using the white light illumination lamp and acquiring a reflected image of the white light illuminated area of each of the regions collected by the camera;
A5. determining the material type of each region from the corresponding candidate material according to the reflection image and the candidate material information of each region;
A6. determining a cleaning mode of each region according to the material type of each region;
A7. performing a corresponding cleaning operation on each of the regions according to the cleaning mode of each of the regions;
step A5 includes:
A501. acquiring the halo size, halo brightness and image brightness of the reflection image of each region;
A502. determining the material type of each region from the corresponding candidate materials according to the candidate material information of each region and the corresponding halo size, halo brightness and image brightness.
2. The automobile interior cleaning method of claim 1, wherein after step A1 and before step A4, the method further comprises the step of:
A8. sending a light-off command to turn off the ambient lighting.
3. The automobile interior cleaning method of claim 1, wherein step A4 includes:
illuminating a target area with the white light illuminating lamp and acquiring a first reflection image of a white light illuminating area of the target area, which is acquired by the camera;
translating the white light illumination lamp by a first distance to illuminate the target area and acquiring a second reflection image of a white light illumination area of the target area acquired by the camera;
step A501 includes:
acquiring a first pixel diameter and a first pixel center coordinate of a minimum circumscribed circle of a halo region in the first reflection image; the halo region refers to the region enclosed by all pixel points whose RGB values are (255, 255, 255), and the minimum circumscribed circle is the circle of smallest diameter that encloses the halo region;
acquiring a second pixel center coordinate of a minimum circumscribed circle of a halo region in the second reflection image;
calculating the pixel distance according to the following formula:

d = √((x₁ − x₂)² + (y₁ − y₂)²)

wherein d is the pixel distance, x₁ and y₁ are the two pixel coordinate values of the first pixel center coordinate, and x₂ and y₂ are the two pixel coordinate values of the second pixel center coordinate;

calculating the halo size according to the following formula:

S = (L / d) × D

wherein S is the halo size, L is the first distance, and D is the first pixel diameter.
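One way to read the claim 3 computation (the original formula images are not reproduced in this text) is that the known physical lamp translation, the first distance, calibrates a pixels-to-length scale from the observed shift of the halo center, which then converts the halo's pixel diameter into a physical halo size. A minimal sketch under that assumption:

```python
import math

def pixel_distance(c1, c2):
    # Euclidean distance between the two halo-circle center coordinates.
    return math.hypot(c1[0] - c2[0], c1[1] - c2[1])

def halo_size(first_distance, first_pixel_diameter, c1, c2):
    # The known physical translation of the lamp (first_distance) and the
    # resulting pixel shift of the halo center give a pixels-to-length
    # scale, which converts the halo's pixel diameter to a physical size.
    d = pixel_distance(c1, c2)
    return first_distance * first_pixel_diameter / d

# Lamp moved 10 mm; halo center shifted 20 px; halo is 40 px across.
print(halo_size(10.0, 40.0, (100.0, 100.0), (120.0, 100.0)))  # → 20.0
```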
4. The automobile interior cleaning method of claim 3, wherein step A501 comprises:
selecting, on a first circumference, at least four first reference points uniformly distributed along the first circumference, and calculating a first RGB average value over all the first reference points; the first circumference is centered on the point corresponding to the first pixel center coordinate and has the first pixel diameter as its diameter;
selecting, on a second circumference, at least four second reference points uniformly distributed along the second circumference, and calculating a second RGB average value over all the second reference points; the second circumference is centered on the point corresponding to the first pixel center coordinate and has a diameter twice the first pixel diameter;
calculating the halo brightness according to the following formula:

B = V₁ − V₂

wherein B is the halo brightness, V₁ is the first RGB average, and V₂ is the second RGB average.
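The claim 4 measurement compares an RGB average sampled on the halo's circumscribed circle with one sampled on a circle of twice that diameter. Since the original formula image is not reproduced here, the sketch below assumes the halo brightness is the difference of the two averages; the sampling helpers and the synthetic test image are likewise illustrative:

```python
import math

def circle_points(center, diameter, n=8):
    """Sample n points evenly spaced on a circle with the given diameter."""
    cx, cy = center
    r = diameter / 2.0
    return [(cx + r * math.cos(2 * math.pi * k / n),
             cy + r * math.sin(2 * math.pi * k / n)) for k in range(n)]

def mean_rgb(image, points):
    """Average brightness of the RGB pixels sampled at the given coordinates."""
    vals = [image[int(round(y))][int(round(x))] for x, y in points]
    return sum(sum(px) / 3.0 for px in vals) / len(vals)

def halo_brightness(image, center, diameter):
    # First ring on the halo's circumscribed circle, second ring at twice
    # its diameter; their difference measures how sharply the halo decays.
    v1 = mean_rgb(image, circle_points(center, diameter))
    v2 = mean_rgb(image, circle_points(center, 2 * diameter))
    return v1 - v2

# Synthetic 100x100 image: bright disc (radius 25) on a dark background.
img = [[(200, 200, 200) if (x - 50) ** 2 + (y - 50) ** 2 <= 25 ** 2
        else (50, 50, 50) for x in range(100)] for y in range(100)]
print(halo_brightness(img, (50.0, 50.0), 40.0))  # → 150.0
```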
5. The automobile interior cleaning method of claim 3, wherein step A501 comprises:
calculating the average RGB value of the pixel points of the target area in the first reflection image as the image brightness.
6. The automobile interior cleaning method of claim 1, wherein step A502 comprises:
inputting the candidate material information of each region and the corresponding halo size, halo brightness and image brightness into a pre-trained recognition model to obtain a material type recognition result for each region.
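Claim 6 leaves the recognition model unspecified; a minimal stand-in is a nearest-centroid lookup over per-material feature prototypes, restricted to the region's candidate materials. The prototype values below are invented for illustration and are not the patent's trained model:

```python
# Per-material feature prototypes: (halo_size, halo_brightness, image_brightness).
# Values are illustrative, not measured.
PROTOTYPES = {
    "leather": (8.0, 120.0, 90.0),
    "fabric": (2.0, 30.0, 60.0),
}

def identify_material(candidates, features):
    """Pick the candidate material whose prototype is closest to `features`."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # Restrict the search to the region's candidate materials (step A3),
    # which is what reduces the dependence on reflected-light properties.
    return min(candidates, key=lambda m: sq_dist(PROTOTYPES[m], features))

print(identify_material(["leather", "fabric"], (7.5, 110.0, 85.0)))  # → leather
```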
7. An automobile interior cleaning device, applied to a mechanical arm to clean an automobile interior, characterized in that the mechanical arm comprises a camera and a white light illumination lamp;
the automotive interior cleaning device includes:
the first acquisition module is used for acquiring the automobile interior image acquired by the camera;
the first identification module is used for segmenting the automobile interior image by utilizing a pre-trained deep learning network and acquiring the category information of each region;
the second acquisition module is used for acquiring candidate material information of each region according to the category information of each region;
a third acquisition module, configured to illuminate each of the areas with the white light illumination lamp and acquire a reflected image of the white light illumination area of each of the areas collected by the camera;
the second identification module is used for determining the material type of each region from the corresponding candidate material according to the reflection image and the candidate material information of each region;
the first execution module is used for determining the cleaning mode of each area according to the material type of each area;
the second execution module is used for executing corresponding cleaning operation on each area according to the cleaning mode of each area;
the second identification module is used for executing the following steps when determining the material type of each region from the corresponding candidate material according to the reflection image and the candidate material information of each region:
acquiring the halo size, halo brightness and image brightness of the reflection image of each region;
and determining the material type of each region from the corresponding candidate material according to the candidate material information of each region and the corresponding halo size, halo brightness and image brightness.
8. An electronic device, comprising a processor and a memory, wherein the memory stores a computer program executable by the processor, and the processor executes the computer program to perform the steps of the method for cleaning an interior of a vehicle according to any one of claims 1 to 6.
9. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when being executed by a processor, executes the steps of the method for cleaning an interior of a vehicle according to any one of claims 1 to 6.
CN202111259091.7A 2021-10-28 2021-10-28 Automobile interior cleaning method and device, electronic equipment and storage medium Active CN113705544B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111259091.7A CN113705544B (en) 2021-10-28 2021-10-28 Automobile interior cleaning method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN113705544A CN113705544A (en) 2021-11-26
CN113705544B true CN113705544B (en) 2022-02-08

Family

ID=78647208

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111259091.7A Active CN113705544B (en) 2021-10-28 2021-10-28 Automobile interior cleaning method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113705544B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4473409A (en) * 1982-06-02 1984-09-25 Greeley Jackie D Apparatus and method for cleaning vehicle interiors
CN108875568A (en) * 2017-05-12 2018-11-23 福特全球技术公司 Vehicle stain and rubbish detection system and method
WO2019004618A1 (en) * 2017-06-27 2019-01-03 엘지전자 주식회사 Method for traveling on basis of characteristics of traveling surface, and robot for implementing same
CN109562745A (en) * 2016-08-18 2019-04-02 大众汽车有限公司 Method and apparatus for cleaning the inner space of motor vehicle
CN110403529A (en) * 2019-06-21 2019-11-05 安克创新科技股份有限公司 Self-moving device and ground Material Identification method
CN110448225A (en) * 2019-07-01 2019-11-15 深圳拓邦股份有限公司 A kind of method of adjustment, system and cleaning equipment cleaning strategy
CN112087480A (en) * 2019-06-13 2020-12-15 福特全球技术公司 Vehicle maintenance
CN112232399A (en) * 2020-10-10 2021-01-15 南京埃斯顿机器人工程有限公司 Automobile seat defect detection method based on multi-feature fusion machine learning

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
BE788560A (en) * 1972-06-09 1973-03-08 Du Pont PROTECTION AGAINST HALO IN IMAGE FORMING IN MULTI-LAYER PHOTOPOLYMERS
DE19829759A1 (en) * 1998-07-03 2000-01-05 Itt Mfg Enterprises Inc Wiper arm for a system for cleaning a window




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant