CN113824864A - Food material storage device and image processing method - Google Patents

Info

Publication number
CN113824864A
Authority
CN
China
Prior art keywords
internal
image
areas
images
different
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111070757.4A
Other languages
Chinese (zh)
Inventor
孔祥键
刘照光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd filed Critical Hisense Visual Technology Co Ltd
Priority to CN202111070757.4A priority Critical patent/CN113824864A/en
Publication of CN113824864A publication Critical patent/CN113824864A/en
Priority to CN202280042647.3A priority patent/CN117501056A/en
Priority to PCT/CN2022/078407 priority patent/WO2022267518A1/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/57Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F25REFRIGERATION OR COOLING; COMBINED HEATING AND REFRIGERATION SYSTEMS; HEAT PUMP SYSTEMS; MANUFACTURE OR STORAGE OF ICE; LIQUEFACTION SOLIDIFICATION OF GASES
    • F25DREFRIGERATORS; COLD ROOMS; ICE-BOXES; COOLING OR FREEZING APPARATUS NOT OTHERWISE PROVIDED FOR
    • F25D29/00Arrangement or mounting of control or safety devices
    • F25D29/003Arrangement or mounting of control or safety devices for movable devices
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F25REFRIGERATION OR COOLING; COMBINED HEATING AND REFRIGERATION SYSTEMS; HEAT PUMP SYSTEMS; MANUFACTURE OR STORAGE OF ICE; LIQUEFACTION SOLIDIFICATION OF GASES
    • F25DREFRIGERATORS; COLD ROOMS; ICE-BOXES; COOLING OR FREEZING APPARATUS NOT OTHERWISE PROVIDED FOR
    • F25D2400/00General features of, or devices for refrigerators, cold rooms, ice-boxes, or for cooling or freezing apparatus not covered by any other subclass
    • F25D2400/36Visual displays

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • Thermal Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Cold Air Circulating Systems And Constructional Details In Refrigerators (AREA)

Abstract

The application discloses a food material storage device and an image processing method. Internal view images of the storage areas on different shelves are acquired; the reference alignment position of a fixed reference object is obtained in each of these images; and the internal view images of the different shelf areas are then alignment-corrected according to the reference alignment positions. With the food material storage device and image processing method, the relative position of the fixed reference object, which vertically spans all shelf storage areas, remains consistent across the images after imaging, the misalignment (fault) between the internal view images of different shelf areas is eliminated, and the display effect of the storage compartment's internal view images is improved.

Description

Food material storage device and image processing method
Technical Field
The present application relates to the technical field of food material storage devices, and more particularly, to a food material storage device and an image processing method.
Background
At present, users of food material storage devices need to check the food materials stored in the storage compartment from time to time. Each check requires opening and closing the device's door, which makes inspecting the food materials in the storage compartment cumbersome. Frequent opening and closing of the door also increases the device's power consumption, which runs counter to the prevailing trend toward energy-efficient electrical appliances.
Disclosure of Invention
The embodiments of the application provide a food material storage device and an image processing method, aiming to optimize the display effect of the internal view images of the storage compartment.
In a first aspect, the present application provides a food material storage device, comprising:
a box body, inside which a storage chamber for storing food materials is formed, wherein a storage partition board and a fixed reference object are arranged inside the storage chamber, the storage partition board divides the storage chamber into at least two layers of object placing areas, and the fixed reference object vertically spans every layer of object placing area in the storage chamber;
the door is arranged at the opening of the storage chamber;
a plurality of cameras arranged on the side of the box door facing the interior of the storage chamber, in one-to-one correspondence with the object placing areas, and configured to acquire internal view images of the object placing areas on different layers in the storage chamber when the box door is closed;
a controller connected with the camera and configured to:
acquiring internal scene images of different layered object areas;
acquiring reference alignment positions of the fixed reference objects in the internal view images of the different layered object areas;
carrying out alignment correction on the internal scene images of different layered object areas according to the reference alignment position;
and a display arranged on the outer side of the box door and configured to display the alignment-corrected internal view images of the different layered object placing areas.
In a second aspect, the present application further provides an image processing method, including:
controlling a camera to collect internal view images of the different layered object placing areas in a storage chamber of the food material storage device, wherein the camera is arranged on the side of the door of the food material storage device facing the interior of the storage chamber and is in one-to-one correspondence with the object placing areas;
acquiring internal scene images of different layered object areas;
acquiring reference alignment positions of fixed reference objects in the internal view images of different layered object areas; the storage room is internally provided with a storage partition board and a fixed reference object, the storage partition board divides the storage room into at least two storage areas, and the fixed reference object longitudinally spans the storage areas in all layers in the storage room;
carrying out alignment correction on the internal scene images of different layered object areas according to the reference alignment position; and the internal view images of the different layered object areas after the alignment correction are displayed by a display arranged on the outer side of the box door.
According to the above technical solution, a display is provided to show the alignment-corrected internal view images of the different layered object placing areas, so that a user can check the food materials in the storage chamber through the display without opening or closing the door. This makes it more convenient for the user to inspect the food materials and helps reduce the power consumption of the food material storage device. In addition, a fixed reference object that vertically spans every layered object placing area in the storage chamber is used: internal view images of the different areas are acquired, the reference alignment position of the fixed reference object is determined in each image, and the images are then alignment-corrected according to these positions. As a result, the relative position of the fixed reference object is consistent across the images after imaging and its corresponding portions in the different images are aligned, which eliminates the misalignment (fault) between the internal view images of different layered object placing areas and optimizes the display effect of the storage chamber's internal view images.
Drawings
In order to explain the technical solutions of the present application more clearly, the drawings needed in the embodiments are briefly described below. Obviously, other drawings can be derived from these drawings by those skilled in the art without creative effort.
Fig. 1 is a schematic perspective view of a food material storage device according to an exemplary embodiment of the present application;
fig. 2 is an external view of a food material storage device according to an exemplary embodiment of the present application;
fig. 3 is a schematic diagram of a hardware configuration of a food material storage device according to an exemplary embodiment of the present application;
FIG. 4 is a flow diagram illustrating an image processing method according to an exemplary embodiment of the present application;
FIG. 5 is a schematic illustration of an interior view image of different layered object regions after gray scale processing according to an exemplary embodiment of the present application;
fig. 6(a) to 6(c) are schematic diagrams illustrating the internal view image width adjustment process corresponding to left-alignment correction according to an exemplary embodiment of the present application;
fig. 7(a) to 7(c) are schematic diagrams illustrating the internal view image width adjustment process corresponding to right-alignment correction according to an exemplary embodiment of the present application;
fig. 8 is a schematic illustration of an internal view image of different layered object areas after alignment correction according to an exemplary embodiment of the present application.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the description of the present application, it is to be understood that the terms "center", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience in describing the present application and simplifying the description, but do not indicate or imply that the referred device or element must have a particular orientation, be constructed in a particular orientation, and be operated, and thus should not be construed as limiting the present application.
The terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality" means two or more unless otherwise specified.
In the description of the present application, it is to be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; the connection may be direct, indirect through an intermediate medium, or an internal communication between two elements. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art on a case-by-case basis.
Fig. 1 is a schematic perspective view of a food material storage device according to an exemplary embodiment of the present application. The food material storage device may be used for storing various kinds of food materials and may be, for example, a refrigerator, an ice chest, a food material display cabinet, or a similar storage device. Taking a refrigerator as an example, in the example shown in fig. 1 the refrigerator provided by the embodiment of the application is approximately rectangular. The refrigerator includes a storage chamber 110 and a door: the storage chamber 110 provides a limited storage space for storing food materials, and the door is provided at the opening of the storage chamber 110. The storage chamber 110 is a box body having an opening, a storage partition 160 is provided inside it, and the storage partition 160 divides the storage chamber 110 into at least two storage areas. In the example shown in fig. 1, the storage chamber 110 includes a freezing compartment 111 located below and a refrigerating compartment 112 located above. The freezing compartment 111 and the refrigerating compartment 112 each have one or more independent storage spaces, such as the drawer-type compartments 111A included in the freezing compartment 111 and the multi-layer storage areas 1121 to 1123 included in the refrigerating compartment 112; one storage area may be a single receiving compartment, and the top illumination lamp 140 is located inside the receiving compartment 1121.
In the example shown in fig. 1, the plurality of drawer compartments 111A are covered by drawer doors 111B. The refrigerating chamber 112 is divided into left and right sides, which are respectively covered by a door 112B pivotably mounted on the cabinet.
It should be noted that fig. 1 is an example of an embodiment of the present application, and the illustrated refrigerator does not constitute a limitation to the refrigerator provided in the present application. For example, in further embodiments of the present application, the refrigerator cabinet is divided into left and right sides, defining the left side cabinet as the freezer compartment and the right side cabinet as the refrigerator compartment, both of which may be covered by a door pivotally mounted on the cabinet. As another example, in further embodiments of the present application, the storage compartments include a refrigeration compartment, a freezer compartment, and a temperature change compartment.
Based on the food material storage device provided by the above embodiment, a plurality of cameras are arranged inside the cabinet of the food material storage device to acquire internal view images of the storage areas 1121 to 1123 on different layers in the storage chamber 110. In some embodiments, the cameras may be disposed on the side of the door facing the interior of the storage chamber 110, with the lenses facing the storage chamber 110. Thus, when the door is closed, the cameras directly face the interior of the storage chamber 110 and can capture internal view images of the different layers of storage areas 1121 to 1123. It should be understood that a plurality of lamps are provided on the inner walls of the storage chamber 110, including but not limited to a top illumination lamp 140 on the top wall and side backlights on the side walls. In order to provide sufficient light for the cameras to capture images of the interior of the storage chamber 110, the top illumination lamp 140 is kept on and the side backlights are turned off while the cameras operate.
In the example shown in fig. 1, a camera mounting plate 130 is disposed at one side edge of the refrigerating compartment door 112B, the camera may include cameras 131, 132 and 133, the cameras 131, 132 and 133 are embedded in the camera mounting plate 130, and the cameras 131, 132 and 133 are all connected to a controller (see, in particular, the controller 220 and the description thereof in fig. 2) of the food storage device to operate under the control of the controller. The cameras are disposed corresponding to the storage areas one by one, for example, when the door 112B of the refrigerating compartment is closed, the cameras 131, 132 and 133 face the storage areas 1121, 1122 and 1123 included in the refrigerating compartment 112, so that the cameras 131, 132 and 133 can be used for collecting the interior images of the storage areas 1121, 1122 and 1123, respectively, that is, the cameras can collect the interior images of the storage areas at different levels in the storage compartment when the door is closed.
It should be noted that fig. 1 is an example of the embodiment of the present application, and the number, mounting positions, and mounting manner of the cameras shown there do not limit the food material storage device provided by the present application. For example, in another embodiment, a camera mounting plate 130 is disposed at the edge of the refrigerating compartment door 112B close to the freezing compartment door 111B, and a mechanical slide rail and a driving motor are disposed in the camera mounting plate 130; the camera is slidably connected to the slide rail through a slider, and the driving motor is connected to the slider, so that the camera can slide vertically along the rail when driven by the motor. The controller is connected to the driving motor. When the internal view images of the refrigerating compartment need to be collected, the driving motor is first controlled to move the camera directly opposite the first-layer (or third-layer) storage area, and the camera is then controlled to capture that area's internal view image; after the capture is finished, the motor moves the camera directly opposite the second-layer storage area and the camera captures its internal view image, and so on until the internal view images of all storage areas have been collected.
Based on the food material storage device provided by the embodiment, the display is arranged on the outer side of the door of the food material storage device, so that the food material storage device has a display function.
Fig. 2 is an appearance schematic diagram of a food material storage device according to an exemplary embodiment of the present application. Taking the food storage device as an example of a refrigerator, in the example shown in fig. 2, a display may be provided on refrigerating chamber door 112B, and the display may be embedded in a door body. A tag reading area is provided in an area of the refrigerating compartment door 112B below the display, and an antenna for reading an RFID (Radio Frequency Identification) tag is built in the tag reading area to identify a short-distance RFID tag.
In some embodiments, the display screen is positioned in a plane that is flush with the outer surface of the door.
In the example shown in fig. 1 and 2, the food storage device provided by the embodiment of the present application, for example, a refrigerator, has at least two storage compartments, such as a freezing compartment, a refrigerating compartment, and a temperature-changing compartment, disposed inside a cabinet. Each storage room can be provided with a plurality of independent storage spaces, such as drawer type compartments and accommodating grids.
Fig. 3 is a schematic hardware configuration diagram of a food material storage device according to an exemplary embodiment of the present application. In the example shown in fig. 3, the food storage device 200 may comprise at least one of a display 210, a controller 220, an antenna 230 for detecting RFID tags, a detector 240, a camera 250, a speaker 260, a memory 270, and a user input interface. The display 210, antenna 230, detector 240, camera 250, speaker 260, and memory 270 are coupled to the controller 220 through a communication interface.
The display 210 is configured to receive the image signal output by the controller 220, to display video content, images, and components of a menu control interface, and to present a control UI (User Interface) for controlling the food material storage device 200.
The controller 220 may include one or more processing units, such as a system on a chip (SoC), a Central Processing Unit (CPU), a Microcontroller (MCU), a memory controller, and the like. The different processing units may be separate devices or may be integrated into one or more processors.
In some embodiments, controller 220 communicates with antenna 230 via a serial port.
In some embodiments, the controller 220 includes an RFID module, and the RFID module, the antenna 230 and the RFID tag form an RFID read/write system, the antenna 230 is used for transmitting radio frequency signals between the RFID module and the RFID tag, and the RFID module completes read/write operations on the RFID tag through the antenna 230.
In some embodiments, the RFID module communicates with the antenna through a serial port.
The memory 270 may include one or more memory units, for example, may include a volatile memory (volatile memory), such as: dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), and the like; non-volatile memory (NVM) may also be included, such as: read-only memory (ROM), flash memory (flash memory), and the like. The different memory units may be separate devices, or may be integrated or packaged in one or more processors or communication interfaces, and become a part of the processors or communication interfaces.
Stored in memory 270 are program instructions and applications. The controller 220 may call the program instructions in the memory 270 or run the application program to enable the food storage device to execute a related method, such as the method provided in the embodiments of the present application.
The camera 250 is used for acquiring internal views of the different layered storage areas in the storage chamber, including but not limited to still images; for example, internal view video can also be captured. In some embodiments, the cameras in camera 250 may use wide-angle lenses. A wide-angle lens has a short focal length, so a large area of the storage chamber interior can be captured within a short shooting distance. In some embodiments, the camera 250 may use an ordinary wide-angle lens with a focal length of 38-24 mm and a viewing angle of 60-84 degrees, or an ultra-wide-angle lens with a focal length of 20-13 mm and a viewing angle of 94-118 degrees.
The detector 240 includes at least a sound collector such as a microphone, which may be used to receive the user's voice; illustratively, a voice password used by the user to control the food material storage device 200 is collected through the microphone. The detector 240 may further include door state sensors, such as magnetic or mechanical switches, for detecting the opening and closing signals of each door, recording the door's open/closed state according to the detected signals, and transmitting the state information to the controller 220. Illustratively, when the detector 240 detects an opening or closing signal of any door, it transmits the latest state information to the controller 220.
The user input interface includes at least one of a microphone, a touchpad, a sensor, a key, or another input interface. For example, the user can input a command through voice, touch, gesture, or pressing; the input interface converts the received analog signal into a digital signal, converts the digital signal into a corresponding command signal, and sends the command signal to the controller 220.
In some embodiments, the controller 220 may include a voice recognition module, and the voice recognition module further includes a voice parsing unit and a voice instruction database, so that the food material storage device can independently perform voice recognition on voice data input by a user and match the recognized voice content with voice instructions in the voice instruction database.
The number of antennas 230 is not limited in this application. For example, the antenna 230 may include an antenna provided in each of the storage compartments and an antenna provided inside the tag reading area, wherein the antenna provided in each of the storage compartments is mainly used to scan the RFID tag in each of the storage compartments to read the tag information of the RFID tag, and the antenna provided inside the tag reading area is mainly used to scan the RFID tag on the tag reading area to read the tag information of the RFID tag.
In some embodiments, the power of the antennas provided in different storage compartments may differ. The power of each storage compartment's antenna should be sufficient for it to identify the food materials in its own compartment, avoiding both the higher misidentification rate that occurs when the power is too high and the incomplete identification of the compartment's food materials when the power is too low. In a specific implementation, the power of each antenna may be determined in advance according to the size of each storage compartment and configured in the food material storage device; the larger the space, the higher the power.
In the example shown in fig. 1, when the refrigerating chamber door 112B is closed, the cameras 131, 132, and 133 respectively face the storage areas 1121, 1122, and 1123 of the refrigerating compartment 112 and can be used to collect their internal view images; one layer of storage area corresponds to one receiving compartment in the storage chamber 110. That is, the camera 250 can collect internal view images of the different layered storage areas in the storage chamber when the door is closed. The controller 220 can stitch the internal view images acquired by the different cameras, and the display 210 receives the stitched internal view images output by the controller 220 and displays them to show the actual interior of the storage chamber.
Alternatively, the controller 220 may also directly transmit the internal view images of different object areas acquired by different cameras to the display 210, and the display 210 displays the internal view images of different object areas respectively. For example, the display 210 may be a touch-control integrated display, and the user may select the internal view image of the one or more layers of the object region to be viewed through a touch operation on the display 210. Fig. 4 is a flowchart illustrating an image processing method according to an exemplary embodiment of the present application. In some embodiments, controller 220 performs the following steps shown in fig. 4:
s310, internal scene images of different layered object areas are obtained.
In the example shown in fig. 1, to obtain the internal view images of the different storage areas in the storage chamber, a camera panel may be embedded in the left or right door of the food material storage device. For example, three cameras 131, 132, and 133 are embedded in the camera panel from top to bottom, their heights corresponding to the three storage areas 1121, 1122, and 1123, and each camera acquires the internal view image of its corresponding storage area. After the user finishes placing or removing food materials and closes the door, the top illumination lamp 140 can be turned off after a delay, for example within 20 s, to provide a good lighting environment for the cameras. To prevent halo artifacts from making the photos unclear, the backlight of the food material storage device can be turned off immediately after the door is closed. Meanwhile, after the controller learns that the refrigerating compartment door has been closed, it initiates a photographing process: each camera is triggered in a certain order to photograph the interior of the three layered storage areas, and the triggering order of the cameras can be set to match the turn-off order of the backlights.
In some embodiments, the cameras photograph the different layered storage areas in the storage chamber and transmit the internal view images to the controller; after acquiring them, the controller performs grayscale processing on the internal view images of the different layered storage areas. Fig. 5 is a schematic diagram of the internal view images of different layered storage areas after grayscale processing according to an exemplary embodiment of the present application. In fig. 5, the three internal view images from top to bottom correspond to the first layer 1121 to the third layer 1123 of the storage chamber.
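To ground the step above, the following is a minimal sketch (not the patent's own implementation) of loading one interior image per shelf and converting it to grayscale with OpenCV. The function name and the idea of reading from file paths are illustrative assumptions; on the device itself the frames would come from the camera driver.

```python
import cv2


def load_shelf_images_gray(image_paths):
    """Read one interior image per shelf and convert each to grayscale.

    `image_paths` is an illustrative list of file paths ordered from the top
    shelf to the bottom shelf.
    """
    gray_images = []
    for path in image_paths:
        bgr = cv2.imread(path)  # BGR frame as captured
        if bgr is None:
            raise FileNotFoundError(path)
        gray_images.append(cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY))  # single-channel grayscale
    return gray_images
```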
At present, to make it convenient for users to check the food information in screen-equipped smart food material storage devices and to reduce the power consumption of opening and closing the door, an intelligent photographing function is added to the device. After the user closes the door, the device's screen-camera service calls the multiple built-in cameras to photograph the different layered storage areas, and the resulting photos are displayed on the device's screen, so the user can learn the food information without opening the door again. In the example shown in fig. 5, when multiple cameras photograph different layered storage areas, there may be tolerances in the positions of the different cameras' image sensors on the circuit board, in the lens mounting positions, and in the assembly of the circuit board and housing, and the positions of the cameras on the elastic plastic vertical partition plate may shift considerably due to heating or compression. These factors ultimately cause the shooting angles of the different cameras to deviate from one another, so it is difficult to guarantee that an object vertically spanning all shelves in the device keeps a consistent relative position after imaging, and the internal view images of different shelf areas show a misalignment (fault). For example, the back plate 150 shown in fig. 1 and 5 vertically spans every layered storage area; as shown in fig. 5, its left and right boundaries are clearly not aligned across the images, and this misalignment of the back plate 150 across the internal view images of different layered storage areas degrades the display effect of the final internal view image.
And S320, acquiring the reference alignment position of the fixed reference object in the internal view images of the different layered object areas.
The fixed reference object is arranged inside the storage chamber and vertically spans every layered storage area, i.e., a portion of the fixed reference object is present in each layered storage area, and the fixed reference object extends along the vertical direction of the storage chamber across the layered storage areas. In some embodiments, to obtain the reference alignment positions of the fixed reference object in the internal view images of the different layered storage areas, positions on the fixed reference object that lie on the same vertical line and belong to different layered storage areas may be determined as the reference alignment positions.
For example, the fixed reference object may be a marking line extending in the vertical direction. The center position of the marking-line portion within each layered storage area may be set as the reference alignment position; specifically, the center positions of the marking-line portions within the first, second, and third layered storage areas may respectively be set as the reference alignment positions of the fixed reference object in the internal view images of those areas.
It should be noted that, non-center positions, which are located on the same straight line along the vertical direction and located in different layered object areas, on the marking line may also be set as reference alignment positions, which is not specifically limited in this embodiment of the application.
For example, the fixed reference object may also be a solid structure in the food material storage device, in the example shown in fig. 1, the fixed reference object may be a back plate 150, the back plate 150 is a monolithic plate-shaped structure disposed inside the storage chamber, the back plate 150 longitudinally spans each layered object area in the storage chamber, positions of the back plates 150, which are located on the same straight line along the vertical direction and located in different layered object areas, may be set as reference alignment positions, for example, the central position of the back plate 150 in the inside view images of different layered object areas may be set as a reference alignment position.
It should be noted that, it is also possible to set a non-central position, which is located on the same straight line in the vertical direction and located in different layered object areas, on the back plate 150 as a reference alignment position, which is not specifically limited in this embodiment of the application. In the example shown in fig. 5, it can be visually seen that the reference alignment positions on the backboard 150 of the internal view images of the different layer object areas corresponding to the three drawings are not aligned, for example, the center position of the backboard 150 in the internal view images of the different layer object areas is not aligned, and the backboard 150 has a fault problem in the internal view images of the different layer object areas, which affects the display effect of the final internal view image.
It should be noted that the above embodiments describe the fixed reference object only by taking the marking line and the back plate as examples; the specific form of the fixed reference object is not limited, and any structure or line that vertically spans every layered storage area of the storage chamber may serve as the fixed reference object in the embodiments of the present application.
In some embodiments, the fixed reference object is the back plate 150 disposed inside the storage chamber. To obtain the reference alignment positions of the fixed reference object in the internal view images of the different layered storage areas, the left and right boundaries of the back plate 150 may first be obtained in each image, and the center position of the back plate 150 may then be derived from those boundaries. The alignment correction of the internal view images according to the reference alignment positions can then be carried out by correcting the center positions of the back plate 150 in the different images so that they are vertically aligned.
In some embodiments, the left and right boundaries of the back plate 150 in the internal view images of the different layered storage areas may be obtained as the vertical lines whose number of non-zero pixels in those images is greater than or equal to a set threshold. For example, a Hough transform algorithm may be used to obtain such vertical lines.
Specifically, the Hough transform algorithm can separate geometric shapes with common characteristics, such as straight lines or circles, from an image, and can effectively suppress noise interference. The left and right boundaries of the back plate 150 should appear as vertical lines in the internal view image, so in order to obtain them the embodiment of the application separates the vertical lines in the internal view image of each layered storage area using the Hough transform algorithm. The specific process is as follows:
and converting the rectangular coordinate system into a polar coordinate system. For any point a (X0, Y0) in the rectangular coordinate system, the straight line passing through the point a satisfies the equation Y0 ═ k × X0+ b, where k is the slope b is the intercept, and the cluster of straight lines passing through the point a (X0, Y0) in the X-Y plane can be represented by Y0 ═ k × X0+ b, but for the vertical line, the slope thereof is infinite, so the vertical line cannot be represented by the rectangular coordinate system, and the rectangular coordinate system needs to be converted to the polar coordinate system.
In the polar coordinate system, a straight line is represented by the equation ρ = x·cos(θ) + y·sin(θ), where ρ is the distance from the origin to the line and θ is the polar angle of the corresponding point; a pair (ρ, θ) determines one straight line. Each non-zero pixel in the internal view image of a layered storage area can therefore be mapped into the polar-coordinate parameter space, and a two-dimensional histogram hist(ρ, θ) is constructed. Points in the histogram with θ = π/2 lie on a vertical straight line, and the number of non-zero pixels on that vertical line can be determined from the value of ρ. The vertical lines whose non-zero pixel count in the histogram is greater than or equal to the set threshold are the vertical lines to be detected in the embodiment of the application. For example, the set threshold may be, but is not limited to, 30, i.e., vertical lines containing more than 30 non-zero pixels in the internal view images of the different layered storage areas are the candidate vertical lines; limiting the number of non-zero pixels with the threshold effectively improves the detection accuracy of the vertical lines.
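As an illustration only, the sketch below finds candidate vertical lines by binarizing the shelf image and counting non-zero pixels per column, which amounts to evaluating the hist(ρ, θ) accumulator described above only at the vertical orientation, where ρ reduces to the column index. The Canny thresholds and the function name are assumptions; the 30-pixel count threshold comes from the text.

```python
import cv2
import numpy as np


def find_vertical_lines(gray, count_threshold=30):
    """Return column indices that behave like vertical lines.

    Binarize the grayscale shelf image with an edge detector, then count the
    non-zero pixels in each column; a column whose count reaches
    `count_threshold` (30 in the text) is kept as a candidate vertical line.
    """
    edges = cv2.Canny(gray, 50, 150)          # non-zero pixels mark edges
    counts = np.count_nonzero(edges, axis=0)  # per-column non-zero pixel count
    return np.flatnonzero(counts >= count_threshold)
```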
To obtain the left and right boundaries of the back plate in the internal view images of the different layered storage areas, after the vertical lines whose non-zero pixel count is greater than or equal to the set threshold have been found, the boundaries are determined from the perpendicular distance between each vertical line and the horizontal center pixel of the internal view image: the vertical line located to the left of the horizontal center pixel with the smallest perpendicular distance to it is taken as the left boundary of the back plate, and the vertical line located to its right with the smallest perpendicular distance is taken as the right boundary.
In the example shown in fig. 5, the detected white vertical lines on the left and right of the back plate 150 are the vertical lines whose non-zero pixel count in the internal view images is greater than or equal to the set threshold. For example, in the internal view image of the first-layer storage area, two vertical lines are detected on each of the left and right sides of the back plate 150; in the image of the second-layer storage area, four vertical lines are detected on the left of the back plate 150 and two on its right; and in the image of the third-layer storage area, three vertical lines are detected on the left and two on the right. In the example shown in fig. 5 it is therefore difficult to confirm the left and right boundaries of the back plate 150 simply by scanning from left to right or from right to left.
In the example shown in fig. 5, the horizontal center pixel (100, width/2) of the internal view image may be determined first, where the row of the horizontal center pixel may be, but is not limited to, the 100th row of the image and width is the width of the internal view image; the search then proceeds from the horizontal center pixel A outward to the left and to the right. Illustratively, the first vertical line found in each direction is taken as the left or right boundary of the back plate 150: the second vertical line from the left in the first-layer internal view image is determined as the left boundary and the second from the right as the right boundary; the fourth from the left in the second-layer image as the left boundary and the second from the right as the right boundary; and the third from the left in the third-layer image as the left boundary and the second from the right as the right boundary.
After the left and right boundaries of the backboard 150 in the internal view images of the different layer object areas are obtained, the center position of the backboard 150 is obtained according to the left and right boundaries of the backboard 150, and the center position of the backboard 150 is used as the reference alignment position corresponding to the internal view images of the different layer object areas. Specifically, after the left and right boundaries of the back panel 150 in the inside view images of different layered object areas are obtained, the center position of the back panel 150 in each inside view image can be calculated according to the left and right boundaries of the back panel 150.
For example, let upLeftCol and upRightCol respectively denote the horizontal coordinates of the left and right boundaries of the back plate 150 in the first-layer internal view image, medLeftCol and medRightCol those in the second-layer image, and downLeftCol and downRightCol those in the third-layer image. The horizontal coordinate upCenter of the center position of the back plate 150 in the first-layer internal view image then satisfies the following formula:
upCenter=(upLeftCol+upRightCol)/2
The horizontal coordinates upCenter, medCenter, and downCenter of the center position of the back plate 150 in the internal view images of the three layered storage areas can be obtained by analogy with the above formula.
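A minimal sketch of the boundary search and center computation just described, assuming the candidate vertical-line columns from the previous step are available; it picks the candidate nearest to the image's horizontal center on each side as the back-plate boundary and returns their midpoint, mirroring upCenter = (upLeftCol + upRightCol)/2. All names are illustrative.

```python
import numpy as np


def backplate_center(vertical_cols, image_width):
    """Pick back-plate boundaries and return the center column.

    `vertical_cols` are the candidate vertical-line columns found earlier.
    The left boundary is the candidate nearest to the image's horizontal
    center on its left side, the right boundary the nearest candidate on its
    right side, and the center is their midpoint.
    """
    cols = np.asarray(vertical_cols)
    mid = image_width // 2                  # column of the horizontal center pixel
    left_candidates = cols[cols < mid]
    right_candidates = cols[cols > mid]
    if left_candidates.size == 0 or right_candidates.size == 0:
        raise ValueError("no vertical line found on one side of the center")
    left = int(left_candidates.max())       # nearest candidate left of center
    right = int(right_candidates.min())     # nearest candidate right of center
    return left, right, (left + right) / 2.0
```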
In some embodiments, before acquiring the vertical lines whose non-zero pixel count in the internal view image is greater than or equal to the set threshold, the controller is further configured to select the region of the internal view image where the back plate is located, and to acquire the vertical lines only within that region.
Specifically, the standard Hough transform algorithm exhaustively evaluates all pixels. In the example shown in fig. 5, the internal view image of a storage area acquired by a camera contains not only the region where the back plate 150 is located but also regions occupied by other structures in the storage chamber, for example part of the storage partition 160. Those regions also contain many non-zero pixels, yet they contribute nothing to the position calibration of the internal view images of the different layered storage areas; if the Hough transform algorithm were still applied to the full-area internal view image, the computation would be time-consuming and inefficient.
To address this, the embodiment of the application does not run the vertical-line acquisition on the full-area internal view image of the storage area; instead, before acquiring the vertical lines whose non-zero pixel count is greater than or equal to the set threshold, the region of the internal view image where the back plate 150 is located is selected. In the example shown in fig. 5, for example, only the top 160 rows of the internal view image may be used for the calculation, i.e., only the region where the back plate 150 appears is used for vertical-line acquisition. This saves roughly 2/3 of the computation of the vertical-line acquisition step and effectively improves its efficiency.
It should be noted that the 160-row figure in the above embodiment is only an example used to distinguish the region where the back plate 150 is located from the regions occupied by other structures in the storage chamber; the number of image rows corresponding to the back plate's region is not specifically limited in this embodiment and may be adjusted according to the mounting position and shooting angle of the corresponding camera.
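As a sketch of the region-of-interest restriction described above (the 160-row figure being only the text's example), the helper below simply keeps the top rows of a shelf image before line detection; the name and default are assumptions.

```python
def backplate_roi(gray, roi_rows=160):
    """Keep only the top rows of a shelf image before vertical-line detection.

    `roi_rows` reuses the text's illustrative 160-row figure; in practice it
    would be tuned per camera mounting position and viewing angle. Restricting
    the search to this strip skips roughly two thirds of the pixels.
    """
    return gray[:roi_rows, :]
```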
And S330, performing alignment correction on the internal scene images of the different layered object areas according to the reference alignment position.
In the example shown in fig. 5, the fixed reference object is the back plate 150 disposed inside the storage chamber, and the alignment correction of the internal view images of the different layered storage areas according to the reference alignment positions may be carried out as an alignment correction of the center positions of the back plate 150 in those images.
In some embodiments, to align the center positions of the back plate 150 in the internal view images of the different layered storage areas so that they are vertically aligned, an extreme value (a minimum or a maximum) of the center positions may first be obtained from the center positions of the back plate 150 in the different images; a first image cropping width is then determined for each image from its own center position and that extreme value, and the side of the image within the first cropping width is cropped off.
In some embodiments, the alignment correction of the center positions of the back plate 150 in the internal view images of the different layered storage areas includes left-alignment correction or right-alignment correction. For left-alignment correction, the minimum center position may be obtained from the center positions of the back plate 150 in the different images, the difference between an image's center position and that minimum is determined as the first image cropping width for that image, and the left-side portion of the image within the first cropping width is cropped off.
Left-alignment correction of the center positions of the back plate 150 in the internal view images of the different layered storage areas means using the center positions to align the left boundary of the back plate 150 across those images, thereby achieving the alignment correction of the back plate across the images. In the example shown in fig. 5, following the description of the foregoing embodiment, the horizontal coordinates upCenter, medCenter, and downCenter of the back plate's center position in the upper, middle, and lower internal view images have been obtained. The minimum of the three values is determined as minCenter, the difference between each image's center position and minCenter is taken as that image's first cropping width, and the left-side portion of the image within that width is cropped off; that is, the calibration parameters of the internal view image of each layered storage area are calculated as follows:
upLeftStart=upCenter–minCenter
medLeftStart=medCenter–minCenter
downLeftStart=downCenter–minCenter
Here, upLeftStart, medLeftStart, and downLeftStart are the left starting positions of the internal view images of the three layered storage areas, i.e., the widths to be cropped from the left side of each image. In the example shown in fig. 5, the center position medCenter of the back plate 150 in the second-layer internal view image is the minimum center position minCenter; the difference between the first layer's center position and minCenter equals the offset between the left boundary of the back plate 150 in the first-layer image and that in the second-layer image, and the difference between the third layer's center position and minCenter equals the corresponding offset for the third-layer image. As the example in fig. 5 shows, cropping from the leftmost side of the first-layer and third-layer internal view images a strip whose width equals the difference between the image's center position and minCenter aligns the left boundary of the back plate across the three images and thereby achieves the alignment correction of the back plate 150 in the internal view images of the three layered storage areas.
In the example shown in fig. 5, the minimum center position minCenter corresponds to the internal view image of the middle storage area. Therefore, a left-side strip of width upCenter - minCenter is cropped from the first-layer internal view image and a left-side strip of width downCenter - minCenter from the third-layer image, while the second-layer image needs no cropping; the left boundary of the back plate 150 is then aligned across the three internal view images.
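The left-alignment crop can be sketched as follows, under the assumption that the per-shelf images and their back-plate center columns (upCenter, medCenter, downCenter) are already available; each image loses center - minCenter columns on its left, so the image holding minCenter stays untouched, matching the example. Function and parameter names are illustrative.

```python
def left_align_crop(images, centers):
    """Crop each shelf image on the left so the back-plate centers line up.

    `images` are the per-shelf frames (NumPy arrays) and `centers` the
    corresponding back-plate center columns. Each image loses
    (center - minCenter) columns from its left edge.
    """
    min_center = min(centers)
    cropped = []
    for img, center in zip(images, centers):
        left_start = int(round(center - min_center))  # e.g. upLeftStart = upCenter - minCenter
        cropped.append(img[:, left_start:])
    return cropped
```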
In this way, the left boundary of the back plate 150 is aligned across the internal view images of the three layered storage areas, i.e., the alignment correction of the back plate 150 across those images is achieved. The display 210 receives the internal view images of the different layered storage areas output by the controller 220 and displays them to show the actual interior of the storage chamber. After imaging, the relative position of the fixed reference object that vertically spans the layered storage areas is consistent across the displayed images, and its corresponding portions in the different images are aligned, which eliminates the misaligned (faulted) display of the internal view images of different layered storage areas and optimizes the display effect of the storage chamber's internal view images.
In some embodiments, right-alignment correction of the center position of the back plate 150 in the internal view images of the different layered object areas may be performed as follows: obtain the maximum center position from the center positions of the back plate 150 in the internal view images of the different layered object areas, determine the difference between the maximum center position and the center position of the back plate 150 in each internal view image as the first image cropping width, and crop the right-side image within the first image cropping width from that internal view image.
Right-alignment correction of the center position of the back plate 150 in the internal view images of the different layered object areas means using the center position of the back plate 150 to align the right boundary of the back plate 150 across the internal view images of the different layered object areas, thereby achieving alignment correction of the back plate 150 in those images. In the example shown in fig. 5, with reference to the foregoing embodiment, the horizontal coordinates upCenter, medCenter and downCenter of the center position of the back plate 150 in the internal view images of the upper, middle and lower layered object areas have been obtained. The maximum of these three values is determined as maxCenter, the difference between maxCenter and the center position of the back plate 150 in each internal view image is determined as the first image cropping width, and the right-side image within the first image cropping width is cropped from that internal view image. That is, the calibration parameter of the internal view image of each layered object area is calculated as follows:
upRightStart=maxCenter–upCenter
medRightStart=maxCenter–medCenter
downRightStart=maxCenter–downCenter
where upRightStart, medRightStart and downRightStart are the right crop amounts of the internal view images of the three layered object areas, i.e. the widths to be cropped from the right side of the respective internal view images. In the example shown in fig. 5, the center position downCenter of the back plate 150 in the internal view image of the third layered object area is the maximum center position maxCenter. The difference between maxCenter and the center position corresponding to the first layered object area equals the horizontal offset between the right boundary of the back plate 150 in the first-layer image and its right boundary in the third-layer image, and the difference between maxCenter and the center position corresponding to the second layered object area equals the horizontal offset between the right boundary of the back plate 150 in the second-layer image and its right boundary in the third-layer image. Therefore, as the example in fig. 5 shows, by cropping from the rightmost side of the first-layer and second-layer internal view images the image within the difference between maxCenter and the respective center position, the right boundaries of the back plate in the three internal view images are aligned, which in turn achieves alignment correction of the back plate 150 in the internal view images of the three layered object areas.
In the example shown in fig. 5, the maximum center position maxCenter corresponds to the internal view image of the third layered object area. Therefore, the right-side image within the width range of maxCenter – upCenter is cropped from the internal view image of the first layered object area, and the right-side image within the width range of maxCenter – medCenter is cropped from the internal view image of the second layered object area, while the internal view image of the third layered object area requires no cropping.
In this way, the right boundaries of the back plate 150 in the internal view images of the three layered object areas are aligned, that is, alignment correction of the back plate 150 in those images is achieved. The display 210 receives the internal view images of the different layered object areas in the storage chamber output by the controller 220 and displays them to present the actual interior of the storage chamber. After imaging, the relative position of the fixed reference object that longitudinally spans the layered object areas is consistent across the internal view images displayed by the display 210, and the corresponding parts of the fixed reference object in the different images are aligned. This solves the problem of staggered display among the internal view images of the different layered object areas and improves the imaging display effect of the internal view images of the storage chamber.
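As a companion illustration (again not part of the original disclosure), the mirror-image right-alignment cropping can be sketched as follows, reusing the same hypothetical conventions:

def right_align_crop(images, centers):
    # Crop the right side of each internal view image so that the back-plate
    # right boundaries line up across the layered object areas.
    max_center = max(centers)                       # maxCenter
    cropped = []
    for img, center in zip(images, centers):
        right_cut = max_center - center             # e.g. upRightStart = maxCenter - upCenter
        width = img.shape[1]
        cropped.append(img[:, :width - right_cut])  # cut off the right-side image of that width
    return cropped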
In some embodiments, the controller is further configured to, after performing alignment correction on the center position of the back plate 150 in the internal view images of the different layered object areas, crop the internal view images according to the minimum width among the widths of the internal view images of the different layered object areas.
In some implementations, cropping the internal view images according to the minimum width among the widths of the internal view images of the different layered object areas may be performed by cropping a side image of each internal view image according to the original width of the internal view image, the maximum center position and the minimum center position, so that the widths of the cropped internal view images of the different layered object areas are all equal to the minimum width, where the minimum width equals the original width of the internal view image minus the maximum center position plus the minimum center position.
Specifically, after the center position of the back plate 150 in the internal view images of the different layered object areas is alignment-corrected, the internal view images are cropped by different widths, so their widths after cropping differ. To ensure that the internal view images of the different layered object areas finally displayed by the display 210 have equal widths, the displayed width needs to be recalculated, and the width of the internal view image of each layered object area finally displayed by the display 210 satisfies the following formula:
newPicWidth=width–maxCenter+minCenter
where newPicWidth is the width of the internal view image of each layered object area finally displayed by the display 210, width is the width of the original internal view image before any processing, maxCenter is the maximum center position of the back plate 150 among the internal view images, and minCenter is the minimum center position of the back plate 150 among the internal view images.
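A minimal sketch of this width unification, under the same hypothetical conventions as above, trims each already-aligned image to newPicWidth; the parameter name align and the choice of which side to trim are assumptions for illustration:

def unify_widths(cropped_images, centers, original_width, align="left"):
    # newPicWidth = width - maxCenter + minCenter
    new_width = original_width - max(centers) + min(centers)
    unified = []
    for img in cropped_images:
        if align == "left":
            unified.append(img[:, :new_width])   # left-aligned: trim the excess from the right
        else:
            unified.append(img[:, -new_width:])  # right-aligned: trim the excess from the left
    return unified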
Fig. 6(a) to 6(c) are schematic diagrams illustrating the internal view image width adjustment process corresponding to left-alignment correction according to an exemplary embodiment of the present application. Fig. 6(a) shows the internal view images of the three layered object areas before any processing, where 150 denotes the back plate and 170 denotes the internal view image of an entire layered object area; fig. 6(a) also indicates, by way of example, the width of the back plate 150 and the width of the entire internal view image 170. Fig. 6(b) shows the internal view images of the three layered object areas after left-alignment correction is performed by cropping the left side of the images, and fig. 6(c) shows the internal view images after they are further adjusted to the uniform minimum width.
In the example shown in fig. 6(a), with the left boundary as the abscissa zero, upCenter equals 9a, medCenter equals 6a, downCenter equals 11a, minCenter equals 6a, and maxCenter equals 11a.
In the example shown in fig. 6(b), referring to the above embodiment, to align the left boundaries of the back plate 150 in the internal view images of the different layered object areas, the internal view image of the second layered object area is kept unchanged, the leftmost image within a width of 3a (upCenter minus minCenter) is cropped from the internal view image of the first layered object area, and the leftmost image within a width of 5a (downCenter minus minCenter) is cropped from the internal view image of the third layered object area. At this point the minimum width among the internal view images of the different layered object areas is that of the third layered object area, namely 13a, which equals the original image width of 18a minus the maximum center position 11a plus the minimum center position 6a.
In the example shown in fig. 6(c), the widths of the internal view images of all layered object areas need to be adjusted to the minimum width 13a. The internal view image of the third layered object area needs no cropping; the rightmost image within a width of 2a is cropped from the internal view image of the first layered object area so that its width equals the minimum width 13a, and the rightmost image within a width of 5a is cropped from the internal view image of the second layered object area so that its width equals the minimum width 13a. In this way, while the back plates 150 in the internal view images of the different layered object areas remain aligned, the widths of the internal view images are unified, so that the internal view images of the different layered object areas are regular images of the same width.
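Using the hypothetical helpers sketched earlier, the fig. 6 numbers can be checked as follows (a is taken as 10 pixels purely for illustration, and the original image width is assumed to be 18a, consistent with the formula above):

import numpy as np

a = 10                                    # pixels per unit, illustrative only
centers = [9 * a, 6 * a, 11 * a]          # upCenter, medCenter, downCenter
original_width = 18 * a
images = [np.zeros((100, original_width, 3), dtype=np.uint8) for _ in centers]

left_aligned = left_align_crop(images, centers)
print([img.shape[1] for img in left_aligned])   # [150, 180, 130] -> 15a, 18a, 13a
unified = unify_widths(left_aligned, centers, original_width, align="left")
print([img.shape[1] for img in unified])        # [130, 130, 130] -> all 13a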
Fig. 7(a) to 7(c) are schematic diagrams illustrating the internal view image width adjustment process corresponding to right-alignment correction according to an exemplary embodiment of the present application. Fig. 7(a) shows the internal view images of the three layered object areas before any processing and indicates, by way of example, the width of the back plate 150 and the width of the entire internal view image 170. Fig. 7(b) shows the internal view images of the three layered object areas after right-alignment correction is performed by cropping the right side of the images, and fig. 7(c) shows the internal view images after they are further adjusted to the uniform minimum width.
In the example shown in fig. 7(a), with the left boundary as the abscissa zero, upCenter equals 9a, medCenter equals 6a, downCenter equals 11a, minCenter equals 6a, and maxCenter equals 11a.
In the example shown in fig. 7(b), referring to the above embodiment, to align the right boundaries of the back plate 150 in the internal view images of the different layered object areas, the internal view image of the third layered object area is kept unchanged, the rightmost image within a width of 2a (maxCenter minus upCenter) is cropped from the internal view image of the first layered object area, and the rightmost image within a width of 5a (maxCenter minus medCenter) is cropped from the internal view image of the second layered object area. At this point the minimum width among the internal view images of the different layered object areas is that of the second layered object area, namely 13a, which equals the original image width of 18a minus the maximum center position 11a plus the minimum center position 6a.
In the example shown in fig. 7(c), the widths of the internal view images of all layered object areas need to be adjusted to the minimum width 13a. The internal view image of the second layered object area needs no cropping; the leftmost image within a width of 3a is cropped from the internal view image of the first layered object area so that its width equals the minimum width 13a, and the leftmost image within a width of 5a is cropped from the internal view image of the third layered object area so that its width equals the minimum width 13a. In this way, while the back plates 150 in the internal view images of the different layered object areas remain aligned, the widths of the internal view images are unified, so that the internal view images of the different layered object areas finally displayed by the display 210 are regular images of the same width.
It should be noted that the width values in the above embodiments are merely illustrative values for explaining the alignment correction and width unification processes, and do not limit the widths of the corresponding structures in the internal view images or their proportions.
In some embodiments, after the internal view images of the different layered object areas are alignment-corrected according to the reference alignment position, the corrected internal view images may be stitched into a single stitched internal view image, and the display 210 is configured to display the stitched internal view image. Fig. 8 is a schematic illustration of the internal view images of the different layered object areas after alignment correction according to an exemplary embodiment of the present application. In the example shown in fig. 8, after the alignment correction and width unification of the above embodiments, the left and right boundaries of the back plate 150 in the internal view images of the different layered object areas are aligned and the image widths are consistent. The controller 220 stitches the corrected internal view images of the different layered object areas into a stitched internal view image, and the display 210 receives the stitched internal view image of the storage chamber output by the controller 220 and displays it to present the actual interior of the storage chamber. After imaging, the relative position of the fixed reference object that longitudinally spans the layered object areas is consistent within the stitched internal view image displayed by the display 210, and the corresponding parts of the fixed reference object are aligned. This solves the problem of staggered display after stitching the different internal view images and improves the imaging display effect of the stitched internal view image of the storage chamber.
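For illustration only, stitching the aligned, equal-width images into one displayable image can be as simple as stacking them vertically; the top-to-bottom ordering is an assumption:

import numpy as np

def stitch_internal_views(unified_images):
    # unified_images: aligned, equal-width internal view images, ordered top to bottom
    return np.vstack(unified_images)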
In the above method, the center of the back plate is calibrated and the internal view images of the different layered object areas are cropped accordingly, which ultimately guarantees that the imaging of the different layered object areas is aligned. The advantage of this method is that the internal view images of the different layered object areas can be alignment-calibrated before the food material storage device leaves the factory and the offset parameters can be stored; after the device leaves the factory, each time the different layered object areas are photographed, the photographs are alignment-corrected according to the stored calibration parameters. This ensures that an object longitudinally spanning the layered object areas in the food material storage device keeps a consistent relative position in the final imaging, and improves the imaging display effect of the stitched internal view image of the storage chamber.
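A minimal sketch of this factory-calibration workflow, assuming a simple JSON file and left-alignment; the file path, key names and helper structure are hypothetical and not part of the original disclosure:

import json

def save_calibration(path, centers, original_width):
    # Persist the per-layer crop offsets measured before the device leaves the factory.
    params = {
        "left_starts": [c - min(centers) for c in centers],        # upLeftStart, medLeftStart, downLeftStart
        "new_width": original_width - max(centers) + min(centers)  # newPicWidth
    }
    with open(path, "w") as f:
        json.dump(params, f)

def apply_calibration(path, images):
    # Crop freshly captured internal view images with the stored offsets.
    with open(path) as f:
        params = json.load(f)
    return [img[:, s:s + params["new_width"]]
            for img, s in zip(images, params["left_starts"])]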
On the basis of the food material storage device provided by the above embodiments, an embodiment of the present application further provides an image processing method, whose execution subject includes, but is not limited to, the controller of the food material storage device. For the specific implementation of the image processing method, reference may be made to the above embodiments, in particular the embodiment shown in fig. 4, which is not repeated here.
It should be noted that the technical field to which the image processing method of the embodiments of the present disclosure applies is not limited to food material storage devices; the method can be applied to any scenario in which different images need to be alignment-corrected so that they are aligned along a set direction. For example, in the field of express delivery lockers, a locker needs to acquire images of different storage areas inside the cabinet body and show them on a display outside the locker, and the image processing method of the embodiments of the present disclosure can be used to align and correct the images of the different storage areas inside the cabinet body.
In a specific implementation, the present application further provides a computer storage medium that stores a program which, when executed, may perform some or all of the steps of the method embodiments provided in the present application. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
Those skilled in the art will clearly understand that the techniques in the embodiments of the present application may be implemented by way of software plus a required general hardware platform. Based on such understanding, the technical solutions in the embodiments of the present application may be essentially implemented or portions thereof contributing to the prior art may be embodied in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method of the embodiments or some portions thereof in the embodiments of the present application.
The same and similar parts in the various embodiments in this specification may be referred to each other. In particular, as for the food material storage device embodiment, since it is basically similar to the method embodiment, the description is relatively simple, and for relevant points, refer to the description in the method embodiment.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (10)

1. A food material storage apparatus, comprising:
a refrigerator body, wherein a storage chamber for storing food materials is provided inside the refrigerator body, a storage partition board and a fixed reference object are provided inside the storage chamber, the storage partition board divides the storage chamber into at least two layered object areas, and the fixed reference object longitudinally spans each layered object area in the storage chamber;
a box door arranged at an opening of the storage chamber;
cameras arranged on the side of the box door facing the inside of the storage chamber, in one-to-one correspondence with the layered object areas, and configured to acquire internal view images of the different layered object areas in the storage chamber when the box door is closed;
a controller connected with the cameras and configured to:
acquiring internal view images of the different layered object areas;
acquiring a reference alignment position of the fixed reference object in the internal view images of the different layered object areas; and
performing alignment correction on the internal view images of the different layered object areas according to the reference alignment position;
and a display arranged on the outer side of the box door and configured to display the alignment-corrected internal view images of the different layered object areas.
2. The food material storage apparatus of claim 1, wherein the controller is configured to obtain a reference alignment position of the fixed reference object in the internal view images of the different layered object areas, and specifically comprises:
determining, as the reference alignment position, positions of the fixed reference object that are located on a same straight line along the vertical direction in the different layered object areas.
3. The food material storage apparatus of claim 1 or 2, wherein the fixed reference object is a back plate provided inside the storage chamber;
the controller is configured to acquire the reference alignment position of the fixed reference object in the internal view images of the different layered object areas, which specifically comprises:
acquiring left and right boundaries of the back plate in the internal view images of the different layered object areas;
acquiring the central position of the back plate according to the left and right boundaries;
the controller is configured to perform alignment correction on the internal view images of the different layered object areas according to the reference alignment position, which specifically comprises:
performing alignment correction on the internal view images of the different layered object areas according to the reference alignment position so as to longitudinally align the center position of the back plate.
4. The food material storage apparatus of claim 3, wherein the controller is configured to acquire the left and right boundaries of the back plate in the internal view images of the different layered object areas, which specifically comprises:
acquiring vertical straight lines in the internal view images of the different layered object areas that contain a number of non-zero pixel points greater than or equal to a set threshold;
acquiring the left and right boundaries of the back plate according to the perpendicular distance between each vertical straight line and the transverse center pixel point of the internal view image, wherein the vertical straight line located on the left side of the transverse center pixel point with the smallest perpendicular distance to it is determined as the left boundary of the back plate, and the vertical straight line located on the right side of the transverse center pixel point with the smallest perpendicular distance to it is determined as the right boundary of the back plate.
5. The food material storage device of claim 4, wherein the controller is further configured to:
before acquiring the vertical straight lines whose number of non-zero pixel points in the internal view image is greater than or equal to the set threshold, select the region where the back plate is located in the internal view image, and acquire the vertical straight lines within the selected region.
6. The food material storage apparatus of claim 3, wherein the controller is configured to perform alignment correction on the internal view images of the different layered object areas according to the reference alignment position to longitudinally align the center position of the back plate, which specifically comprises:
acquiring the maximum value of the center position according to the center positions of the back plate in the internal view images of the different layered object areas;
determining a first image cropping width according to the center position of the back plate in the internal view image and the maximum value of the center position; and
cropping the corresponding side image within the first image cropping width range from the internal view image.
7. The food material storage apparatus of claim 6, wherein the alignment correction of the center position of the back plate in the internal view images of the different layered object areas comprises left-alignment correction or right-alignment correction;
the controller is configured to perform left-alignment correction on the center position of the back plate in the internal view images of the different layered object areas, which specifically comprises:
acquiring the minimum value of the center position according to the center positions of the back plate in the internal view images of the different layered object areas;
determining the difference between the center position of the back plate in the internal view image and the minimum value of the center position as the first image cropping width; and
cropping the left-side image within the first image cropping width range from the internal view image;
the controller is configured to perform right-alignment correction on the center position of the back plate in the internal view images of the different layered object areas, which specifically comprises:
acquiring the maximum value of the center position according to the center positions of the back plate in the internal view images of the different layered object areas;
determining the difference between the maximum value of the center position and the center position of the back plate in the internal view image as the first image cropping width; and
cropping the right-side image within the first image cropping width range from the internal view image.
8. The food material storage device of claim 6, wherein the controller is further configured to:
after performing alignment correction on the center position of the back plate in the internal view images of the different layered object areas, crop the internal view images according to the minimum width among the widths of the internal view images of the different layered object areas.
9. The food material storage apparatus of claim 8, wherein the controller is configured to crop the internal view images according to the minimum width among the widths of the internal view images of the different layered object areas, which specifically comprises:
cropping the side image of the internal view image according to the original width of the internal view image, the maximum value of the center position and the minimum value of the center position, wherein the widths of the cropped internal view images of the different layered object areas are all equal to the minimum width, and the minimum width is equal to the original width of the internal view image minus the maximum value of the center position plus the minimum value of the center position.
10. An image processing method, comprising:
controlling cameras to acquire internal view images of different layered object areas in a storage chamber of a food material storage device, wherein the cameras are arranged on the side, facing the inside of the storage chamber, of a box door of the food material storage device and are in one-to-one correspondence with the layered object areas;
acquiring the internal view images of the different layered object areas;
acquiring a reference alignment position of a fixed reference object in the internal view images of the different layered object areas, wherein a storage partition board and the fixed reference object are provided inside the storage chamber, the storage partition board divides the storage chamber into at least two layered object areas, and the fixed reference object longitudinally spans each layered object area in the storage chamber;
performing alignment correction on the internal view images of the different layered object areas according to the reference alignment position, wherein the alignment-corrected internal view images of the different layered object areas are displayed by a display arranged on the outer side of the box door.

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202111070757.4A CN113824864A (en) 2021-09-13 2021-09-13 Food material storage device and image processing method
CN202280042647.3A CN117501056A (en) 2021-06-23 2022-02-28 Food material storage device
PCT/CN2022/078407 WO2022267518A1 (en) 2021-06-23 2022-02-28 Food storage device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111070757.4A CN113824864A (en) 2021-09-13 2021-09-13 Food material storage device and image processing method

Publications (1)

Publication Number Publication Date
CN113824864A (en) 2021-12-21

Family

ID=78914483

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111070757.4A Pending CN113824864A (en) 2021-06-23 2021-09-13 Food material storage device and image processing method

Country Status (1)

Country Link
CN (1) CN113824864A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105074365A (en) * 2013-03-12 2015-11-18 株式会社东芝 Refrigerator, camera device, refrigerator door pocket, communication terminal, home appliance network system, and interior image display program
CN205580061U (en) * 2015-04-07 2016-09-14 三菱电机株式会社 Refrigerator
US20210166266A1 (en) * 2019-12-02 2021-06-03 Lg Electronics Inc. Artificially intelligent computing device and refrigerator control method using the same

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022267518A1 (en) * 2021-06-23 2022-12-29 海信视像科技股份有限公司 Food storage device

Similar Documents

Publication Publication Date Title
CN105222519B (en) Refrigerator and control method thereof
CA2969482C (en) Method and apparatus for multiple technology depth map acquisition and fusion
US10365034B2 (en) Refrigerator and control method for the same
US10330377B2 (en) Refrigeration appliance comprising a camera module
EP2998671A1 (en) Refrigerator and control method for the same
CN113465287B (en) Intelligent refrigerator and illumination intensity adjusting method
JP6655908B2 (en) Refrigerators and programs
CN111476194B (en) Detection method for working state of sensing module and refrigerator
JP6818407B2 (en) Refrigerator, image management system and program in the refrigerator
CN105704472A (en) Television control method capable of identifying child user and system thereof
DE102013211099A1 (en) Refrigeration unit with a camera module
CN105698482A (en) Determining method for information of stored objects in refrigerator and refrigerator
CN113139402B (en) A kind of refrigerator
DE102013211095A1 (en) Refrigeration device with a door
CN113824864A (en) Food material storage device and image processing method
WO2017112036A2 (en) Detection of shadow regions in image depth data caused by multiple image sensors
DE102013211098A1 (en) Refrigeration unit with a camera module
WO2023185280A1 (en) Method for identifying article information in refrigerator bottle holder, and refrigerator
WO2023185779A1 (en) Method for identifying article information in refrigerator
CN114322409B (en) Refrigerator and method for displaying indoor scenery pictures
WO2022267518A1 (en) Food storage device
CN114294885B (en) Refrigerator and image acquisition method
CN112629110B (en) Refrigerator with a door
WO2023109151A1 (en) Method for identifying information of item in refrigerator, and refrigerator
WO2023073869A1 (en) Door opening angle calculation method and storage unit

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20211221