CN115983758A - Area inspection method, medium, and product

Info

Publication number: CN115983758A
Authority: CN (China)
Prior art keywords: information, area, image, indication information, cargo
Legal status: Pending
Application number: CN202111198422.0A
Other languages: Chinese (zh)
Inventors: 方泉川, 方裕
Current assignee: Yuanqi Forest Beijing Food Technology Group Co., Ltd.
Original assignee: Yuanqi Forest Beijing Food Technology Group Co., Ltd.
Application filed by Yuanqi Forest Beijing Food Technology Group Co., Ltd.
Priority to CN202111198422.0A
Publication of CN115983758A

Landscapes

  • Image Analysis (AREA)

Abstract

Embodiments of the disclosure provide an area inspection method, medium, and product. The method includes: acquiring a plurality of patrol images of an inspected area, image capture state information corresponding to each patrol image, and depth image information of each patrol image; stitching the patrol images according to the image capture state information and the depth image information to obtain a panoramic image of the inspected area; and acquiring cargo indication information from the panoramic image. This technical solution can automatically acquire cargo indication information indicating the type and position of the goods in the inspected area, reducing manual effort; at the same time, without reducing the accuracy of the acquired cargo indication information, it occupies fewer data-processing resources and improves the efficiency of inspecting the goods in the inspected area.

Description

Area inspection method, medium, and product
Technical Field
The present disclosure relates to the field of control technologies, and in particular, to an area inspection method, medium, and product.
Background
In recent years, merchants and businesses that store goods often distribute a large quantity of goods across multiple areas. To keep track of the overall storage condition of the goods, they need to frequently inspect the multiple storage areas to obtain information about the goods stored in each area.
Disclosure of Invention
Embodiments of the disclosure provide an area inspection method, medium, and product.
In a first aspect, an embodiment of the present disclosure provides an area inspection method, where the method includes:
acquiring a plurality of patrol images of an inspected area, image capture state information corresponding to each patrol image, and depth image information of each patrol image, where the image capture state information indicates the spatial position and image capture direction of the area inspection device when the patrol image was captured;
stitching the plurality of patrol images according to the image capture state information and the depth image information of each patrol image to obtain a panoramic image of the inspected area;
and acquiring cargo indication information from the panoramic image, where the cargo indication information indicates the type and position of at least one item of goods in the panoramic image.
In one implementation of the present disclosure, acquiring the cargo indication information according to the panoramic image includes:
and acquiring a pre-trained cargo identification model, and inputting the panoramic image into the cargo identification model to acquire cargo indication information output by the cargo identification model.
In one implementation of the present disclosure, the method further comprises:
sending the cargo indication information and the plurality of patrol images of the inspected area to an indication information verification server;
receiving indication information verification information sent by the indication information verification server in response to the cargo indication information and the patrol images;
judging whether the indication information verification information indicates that the cargo indication information does not satisfy an indication information verification condition;
in response to the indication information verification information indicating that the cargo indication information does not satisfy the indication information verification condition, displaying supplementary entry prompt information, where the supplementary entry prompt information prompts the user to input cargo supplementary entry information;
and acquiring the cargo supplementary entry information, and correcting the cargo indication information according to the cargo supplementary entry information.
In one implementation of the present disclosure, the method further comprises:
and training the cargo identification model by taking the panoramic image as input and the corrected cargo indication information as the expected output.
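As a sketch of this training step, assuming the cargo identification model is a detection network (the patent names no architecture), one gradient update on a panoramic image and its corrected cargo indication information could look like the following; the label format and hyperparameters are assumptions.

```python
import torch
import torchvision

# Assumed stand-in for the cargo identification model: a torchvision
# detection network fine-tuned on the corrected labels.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)

def train_step(panorama, corrected_indication):
    """panorama: 3xHxW float tensor; corrected_indication: dict with
    'boxes' (Nx4, goods positions) and 'labels' (N, goods types)."""
    model.train()
    loss_dict = model([panorama], [corrected_indication])  # detection losses
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```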
In one implementation of the present disclosure, before acquiring the plurality of patrol images of the inspected area, the image capture state information corresponding to each patrol image, and the depth image information of each patrol image, the method further includes:
acquiring inspection task information, and acquiring position information of at least one inspected area according to the inspection task information;
acquiring position information of the area inspection device, and acquiring inspection navigation path information according to the position information of the at least one inspected area and the position information of the area inspection device;
and displaying the inspection navigation path information.
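Purely as an assumed illustration (the patent does not specify a routing algorithm), the inspection navigation path could be ordered greedily by distance:

```python
import math

def plan_route(device_pos, area_positions):
    """Greedy nearest-neighbour ordering of inspected areas.
    device_pos: (x, y); area_positions: {area_id: (x, y)}."""
    route, current = [], device_pos
    remaining = dict(area_positions)
    while remaining:
        area_id = min(remaining, key=lambda a: math.dist(current, remaining[a]))
        route.append(area_id)
        current = remaining.pop(area_id)
    return route  # visit order used to render the navigation path
```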
In one implementation of the present disclosure, acquiring the plurality of patrol images of the inspected area, the image capture state information corresponding to each patrol image, and the depth image information of each patrol image includes:
acquiring the plurality of patrol images, the corresponding image capture state information, and the depth image information when the position of the area inspection device matches the position of the at least one inspected area.
In one implementation of the present disclosure, the method further comprises:
acquiring audio information of the inspected area when the position of the area inspection device matches the position of the at least one inspected area;
and acquiring communication frequency information and speech attitude information from the audio information, and generating interaction state indication information according to the communication frequency information and the speech attitude information.
In one implementation of the present disclosure, generating interaction state indication information according to communication frequency information and speech attitude information includes:
acquiring a pre-trained interaction state recognition model, and inputting the communication frequency information and the speech attitude information into the interaction state recognition model to obtain the interaction state indication information output by the model.
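For illustration only, a minimal sketch of such a model follows, assuming the communication frequency information and speech attitude information arrive as fixed-length feature vectors; the architecture, feature sizes, and state classes are all assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: the patent does not specify the model; a small MLP
# classifier over the two feature groups is assumed here.
class InteractionStateModel(nn.Module):
    def __init__(self, n_freq_feats=4, n_attitude_feats=8, n_states=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_freq_feats + n_attitude_feats, 32),
            nn.ReLU(),
            nn.Linear(32, n_states),  # e.g. poor / neutral / good interaction
        )

    def forward(self, freq_info, attitude_info):
        return self.net(torch.cat([freq_info, attitude_info], dim=-1))
```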
In one implementation of the present disclosure, the method further comprises:
the method comprises the steps of obtaining at least one item of goods display information, area information of an inspected area, shelf indication information and storage equipment indication information according to a panoramic image, wherein the goods display information is used for indicating characters corresponding to at least one goods in the panoramic image, the area information of the inspected area is used for indicating the area of the inspected area, the shelf indication information is used for indicating the type and the position of at least one shelf in the panoramic image, and the storage equipment indication information is used for indicating the type and the position of at least one storage equipment in the panoramic image.
In a second aspect, an area inspection apparatus is provided in an embodiment of the present disclosure.
Specifically, the area inspection apparatus includes:
the inspection system comprises an inspection information acquisition module, an inspection information acquisition module and an inspection information processing module, wherein the inspection information acquisition module is configured to acquire inspection images of a plurality of inspected areas, image acquisition state information corresponding to each inspection image and depth image information of each inspection image, and the image acquisition state information is used for indicating the spatial position and the image acquisition direction of area inspection equipment when the inspection images are acquired;
the panoramic image acquisition module is configured to splice a plurality of patrol images according to the image acquisition state information so as to acquire a panoramic image of the inspected area;
and the inventory unit information acquisition module is configured to acquire goods indication information according to the panoramic image, wherein the goods indication information is used for indicating the type and the position of at least one goods in the panoramic image.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including a memory, a processor, and a computer program stored on the memory, where the processor executes the computer program to implement the method according to any one of the embodiments of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the method according to any one of the embodiments of the first aspect.
In a fifth aspect, an embodiment of the present disclosure provides a computer program product comprising computer instructions that, when executed by a processor, implement the method according to any one of the embodiments of the first aspect.
The technical solution provided by the embodiments of the present disclosure may have the following beneficial effects:
In this technical solution, a plurality of patrol images of an inspected area, image capture state information corresponding to each patrol image, and depth image information of each patrol image are acquired, where the image capture state information indicates the spatial position and image capture direction of the area inspection device when the patrol image was captured. The patrol images are then stitched according to the image capture state information and the depth image information to obtain a panoramic image of the inspected area, and cargo indication information indicating the type and position of at least one item of goods is acquired from the panoramic image. The panoramic image reflects all the image information of the goods stored in the inspected area, yet its data amount is smaller than the total data amount of the individual patrol images. This solution can therefore automatically acquire cargo indication information indicating the type and position of the goods in the inspected area, reducing manual effort; at the same time, without reducing the accuracy of the acquired cargo indication information, it occupies fewer data-processing resources and improves the efficiency of inspecting the goods in the inspected area.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
Other features, objects, and advantages of the present disclosure will become more apparent from the following detailed description of non-limiting embodiments when taken in conjunction with the accompanying drawings. In the drawings:
fig. 1 shows a schematic structural view of an area inspection apparatus according to an embodiment of the present disclosure;
FIG. 2 illustrates a flow diagram of a region inspection method according to an embodiment of the present disclosure;
FIG. 3 shows a graphical user interface GUI schematic of a region inspection apparatus according to an embodiment of the present disclosure;
FIG. 4 shows a graphical user interface GUI schematic of a region inspection apparatus according to an embodiment of the present disclosure;
FIG. 5 illustrates an overall flow diagram of a region inspection method according to an embodiment of the present disclosure;
fig. 6 shows a schematic configuration block diagram of a region inspection apparatus control device according to an embodiment of the present disclosure;
FIG. 7 shows a schematic block diagram of an electronic device according to an embodiment of the present disclosure;
fig. 8 is a schematic block diagram of a computer system suitable for implementing a region inspection method according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily implement them. Furthermore, parts that are not relevant to the description of the exemplary embodiments have been omitted from the drawings for the sake of clarity.
In the present disclosure, it is to be understood that terms such as "including" or "having," etc., are intended to indicate the presence of the disclosed features, numbers, steps, actions, components, parts, or combinations thereof, and do not preclude the possibility that one or more other features, numbers, steps, actions, components, parts, or combinations thereof are present or added.
It should also be noted that the embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
As mentioned above, in recent years, merchants and businesses that store goods often distribute a large quantity of goods across multiple areas. For example, when storing beverages or food, a merchant or business may store them at multiple offline locations (e.g., 24-hour convenience stores or retail stores), where the beverages or food are placed on offline merchandise displays such as shelves.
To keep track of the overall storage condition of the goods, the merchant or business needs to frequently inspect the multiple storage areas to obtain information about the goods stored in each area.
In one embodiment, staff of the merchant or business manually inspect the different storage areas, record the information of the goods stored in each area, and then upload the recorded information, i.e., the inspection result, so that the merchant or business can learn the overall storage condition of the goods from it. However, manually inspecting the different storage areas consumes considerable working time, the inspection efficiency is low, and the recorded information is prone to manual error, which reduces the reliability of the inspection result.
In another embodiment, when staff of a merchant or an enterprise inspect the different storage areas, they photograph the stored goods with a mobile communication terminal such as a smartphone or tablet and upload the photos to the cloud, which performs image recognition on them and derives the information of the goods stored in the different areas, i.e., the inspection results, from the recognition output. In this approach, on the one hand, staff easily miss shots, so the uploaded photos may not cover the image information of all the goods in the storage area; on the other hand, it is difficult to standardize how staff take the photos, so the photos often fail to meet the image recognition requirements and the cloud cannot obtain the corresponding cargo information from them. The reliability of the inspection results obtained with this approach is therefore low.
Therefore, when inspecting areas that store goods, how to improve the reliability of the inspection result without reducing inspection efficiency has become an increasingly urgent problem.
In view of the above drawbacks, an embodiment of the present disclosure provides an area inspection method, in which a plurality of patrol images of an inspected area, image capture state information corresponding to each patrol image, and depth image information of each patrol image are acquired, where the image capture state information indicates the spatial position and image capture direction of the area inspection device when the patrol image was captured. The patrol images are then stitched according to the image capture state information and the depth image information to obtain a panoramic image of the inspected area, and cargo indication information indicating the type and position of at least one item of goods is acquired from the panoramic image. The panoramic image reflects all the image information of the goods stored in the inspected area, yet its data amount is smaller than the total data amount of the individual patrol images. This solution can therefore automatically acquire cargo indication information indicating the type and position of the goods in the inspected area, reducing manual effort; at the same time, without reducing the accuracy of the acquired cargo indication information, it occupies fewer data-processing resources and improves the efficiency of inspecting the goods in the inspected area.
The area inspection method provided by the embodiment of the present application may be applied to an area inspection apparatus, for example, the area inspection apparatus in the present application may include but is not limited to: smart phones (e.g., cell phones), laptops, personal computers, tablets, slates, ultrabooks, wearable devices (e.g., smart bracelets, smart watches, smart glasses, head-mounted display devices, etc.), augmented reality devices, mixed reality devices, cellular phones, personal digital assistants, digital broadcast terminals, and the like. Of course, in the following embodiments, the specific form of the area inspection apparatus is not limited at all.
The structure of the area inspection device provided in the present application is illustrated below using one specific structure as a non-limiting example. Referring to fig. 1, fig. 1 shows a schematic structural view of an area inspection device according to an embodiment of the present disclosure.
As shown in fig. 1, the area inspection apparatus 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not constitute a specific limitation to the area inspection apparatus 100. In other embodiments of the present application, the area inspection apparatus 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an applications processor, a modem processor, a graphics processor, an image signal processor, a controller, a video codec, a digital signal processor, a baseband processor, and/or a neural network processor, among others. The different processing units may be separate devices or may be integrated into one or more processors. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
The charging management module 140 is configured to receive charging input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the area inspection device 100. The charging management module 140 may also supply power to the area inspection device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, and battery state of health (leakage, impedance). In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the area inspection apparatus 100 may be realized by an antenna, a mobile communication module 150, a wireless communication module 160, a modem processor, a baseband processor, and the like.
The antenna is used for transmitting and receiving electromagnetic wave signals. Each antenna in the area inspection device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
The mobile communication module 150 may provide a solution including wireless communication of 2G/3G/4G/5G, etc. applied to the area inspection apparatus 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier, etc. The mobile communication module 150 may receive electromagnetic waves from the antenna, filter, amplify, etc. the received electromagnetic waves, and transmit the electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the area inspection apparatus 100, including wireless local area network, such as wireless fidelity network, bluetooth, global navigation satellite system, frequency modulation, short-range wireless communication technology, infrared technology, and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via an antenna, performs frequency modulation and filtering on electromagnetic wave signals, and transmits the processed signals to the processor 110. Wireless communication module 160 may also receive signals to be transmitted from processor 110, frequency modulate them, amplify them, and convert them into electromagnetic waves via an antenna for radiation.
The area inspection device 100 may implement display functionality via a graphics processor, a display screen 194, and an application processor, among others. The graphics processor is a microprocessor for image processing, coupled to the display screen 194 and the application processor. The graphics processor is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more graphics processors that execute program instructions to generate or alter display information.
The display screen 194 may be used to display images, video, and the like. The display screen 194 includes a display panel. The display panel can be a liquid crystal display, an organic light emitting diode, an active matrix organic light emitting diode or an active matrix organic light emitting diode, a flexible light emitting diode, a quantum dot light emitting diode, or the like. In some embodiments, the area inspection device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The area inspection apparatus 100 can implement a photographing function through an image signal processor, a camera 193, a video codec, and the like.
An image signal processor may be used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter opens, light passes through the lens to the camera photosensitive element, the optical signal is converted into an electrical signal, and the photosensitive element passes the electrical signal to the image signal processor, which converts it into an image visible to the naked eye. The image signal processor can also algorithmically optimize the noise, brightness, and skin tone of the image, as well as parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the image signal processor may be provided in the camera 193.
The camera 193 may be used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a charge coupled device or a complementary metal oxide semiconductor phototransistor. The photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to an image signal processor to be converted into a digital image signal. The image signal processor outputs the digital image signal to the digital signal processor for processing. The digital signal processor converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, camera 193 may be a camera capable of acquiring depth image information, for example, camera 193 may be a structured light camera, a binocular vision camera, or a time of flight camera. In some embodiments, the area inspection apparatus 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
Video codecs are used to compress or decompress digital video. The area inspection device 100 may support one or more video codecs, allowing it to play or record video in a variety of encoding formats.
the neural network processor processes input information rapidly by referring to a biological neural network structure, for example, by referring to a transmission mode between human brain neurons, and can also learn by self continuously. Applications such as intelligent learning of the regional inspection apparatus 100 can be realized by the neural network processor, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, implementing the memory capability of the extended area inspection device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, a phonebook, etc.) created during use of the region check device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a general flash memory, and the like. The processor 110 executes various functional applications of the area inspection apparatus 100 and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The area inspection apparatus 100 may implement an audio function through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor, etc. Such as music playing, recording, etc.
The audio module 170 may be used to convert digital audio information into an analog audio signal for output, and may also be used to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into a sound signal. The area inspection apparatus 100 can listen to audio information through the speaker 170A.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the area inspection apparatus 100 receives a call or voice information, it is possible to receive voice by placing the receiver 170B close to the human ear.
The microphone 170C, also referred to as a "microphone," is used to convert sound signals into electrical signals. When making a call or transmitting voice information, the user can input a voice signal to the microphone 170C by speaking near the microphone 170C through the mouth. The area inspection apparatus 100 may be provided with at least one microphone 170C. In other embodiments, the area inspection apparatus 100 may be provided with two microphones 170C, which may implement a noise reduction function in addition to collecting sound signals. In other embodiments, three, four or more microphones 170C may be provided in the regional inspection apparatus 100 to collect sound signals, reduce noise, identify sound sources, perform directional recording, and so on.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, a 3.5 mm Open Mobile Terminal Platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor 180A is used for sensing a pressure signal, and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A can be of a variety of types, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. The capacitive pressure sensor may be a sensor comprising at least two parallel plates having an electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes. The area inspection apparatus 100 acquires the intensity of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the area inspection apparatus 100 detects the intensity of the touch operation based on the pressure sensor 180A. The area inspection apparatus 100 can also calculate the position of the touch from the detection signal of the pressure sensor 180A. In some embodiments, the touch operations that are applied to the same touch position but have different touch operation intensities may correspond to different operation instructions. For example: and when the touch operation with the touch operation intensity smaller than the first pressure threshold value acts on the short message application icon, executing an instruction for viewing the short message. And when the touch operation with the touch operation intensity larger than or equal to the first pressure threshold value acts on the short message application icon, executing an instruction of newly building the short message.
The gyro sensor 180B may be used to acquire the motion attitude of the area inspection apparatus 100. In some embodiments, the angular velocities of the area inspection apparatus 100 about three axes may be acquired by the gyro sensor 180B. In some embodiments, the angular velocities of the area inspection apparatus 100 about the nine axes may be acquired by the gyro sensor 180B.
The air pressure sensor 180C may be used to measure air pressure. In some embodiments, the area inspection device 100 calculates altitude from barometric pressure values measured by the barometric pressure sensor 180C, aiding in positioning and navigation.
The magnetic sensor 180D may include a Hall sensor. The area inspection device 100 can use the magnetic sensor 180D to detect the opening and closing of a flip holster. In some embodiments, when the area inspection device 100 is a clamshell device, it can detect the opening and closing of the flip cover with the magnetic sensor 180D, and features such as automatic unlocking upon flipping open can then be set according to the detected opening and closing state of the holster or flip cover.
The acceleration sensor 180E can detect the magnitude of acceleration of the area inspection device 100 in various directions, and can detect the magnitude and direction of gravity when the device is stationary. It can also be used to identify the attitude of the area inspection device. In some embodiments, the acceleration sensor 180E may detect the magnitude of acceleration of the area inspection device 100 in 9 directions.
The distance sensor 180F is used for measuring distance. The area inspection device 100 may measure distance by infrared or laser. In some embodiments, when photographing a scene, the area inspection device 100 may use the distance sensor 180F to measure distance in order to achieve fast focusing.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector such as a photodiode. The light-emitting diode may be an infrared LED. The area inspection device 100 emits infrared light outward through the LED and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the area inspection device 100; when insufficient reflected light is detected, the device can determine that there is no object nearby. The area inspection device 100 can use the proximity light sensor 180G to detect that the user is holding it close to the ear during a call, and automatically turn off the screen to save power. The proximity light sensor 180G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense the ambient light level. The area inspection apparatus 100 may adaptively adjust the brightness of the display screen 194 according to the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the area inspection device 100 is in a pocket to prevent inadvertent contact.
The fingerprint sensor 180H is used to collect a fingerprint. The area inspection apparatus 100 can perform fingerprint unlocking, access to an application lock, fingerprint photographing, fingerprint incoming call answering, and the like using the collected fingerprint characteristics.
The temperature sensor 180J may be used to detect temperature. In some embodiments, the area inspection apparatus 100 executes a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the area inspection apparatus 100 performs a reduction in performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, the area inspection apparatus 100 heats the battery 142 when the temperature is below another threshold to avoid abnormal shutdown of the area inspection apparatus 100 due to low temperatures. In other embodiments, the area inspection apparatus 100 performs boosting of the output voltage of the battery 142 when the temperature is lower than yet another threshold value to avoid abnormal shutdown due to low temperature.
The touch sensor 180K is also called a "touch device". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor may pass the detected touch operation to the application processor to obtain the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on the surface of the area inspection apparatus 100 at a different position than the display screen 194.
The keys 190 may include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys. Or may be touch keys. The area inspection apparatus 100 may receive a key input, and generate a key signal input related to user setting and function control of the area inspection apparatus 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration prompts as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also respond to different vibration feedback effects for touch operations applied to different areas of the display screen 194. Different application scenarios (e.g., time alert, receiving information, alarm clock, game, etc.) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
It should be noted that, in some embodiments, the area inspection apparatus provided in the embodiments of the present application may be fixed on a helmet, a piece of clothing, or a harness, so that a person wearing the helmet, the piece of clothing, or the harness may carry the area inspection apparatus for movement without holding the area inspection apparatus. Illustratively, the area inspection device may be secured to the person's shoulder or chest by clothing or a harness.
Fig. 2 shows a flowchart of an area inspection method according to an embodiment of the present disclosure. The method may be applied to an area inspection device. As shown in fig. 2, the area inspection method includes the following steps S101 to S103:
in step S101, a plurality of patrol images of the inspected area, image capture state information corresponding to each patrol image, and depth image information of each patrol image are acquired.
The image acquisition state information is used for indicating the spatial position and the image acquisition direction of the area inspection equipment when the patrol image is acquired.
In an embodiment of the present disclosure, the plurality of patrol images of the inspected area may be images obtained by the area inspection device capturing images in the inspected area multiple times; alternatively, the area inspection device may record video in the inspected area and extract multiple frames, i.e., the plurality of patrol images, from the captured video.
In an embodiment of the present disclosure, the image capture state information corresponding to each patrol image may be obtained by the area inspection device from information collected in real time by its gyroscope sensor and acceleration sensor while in the inspected area. Illustratively, while in the inspected area, the area inspection device continuously acquires motion attitude information through the gyroscope sensor and acceleration information through the acceleration sensor. From this information, the device can determine its relative spatial position and attitude angle at any moment, and from the attitude angle it derives the direction its camera points in, i.e., the image capture direction. The relative spatial position of the area inspection device is its position relative to a target coordinate point in the inspected area; the target coordinate point may be selected randomly or specified in advance, for example as the lowest point, the highest point, or the center point of the entrance of the inspected area.
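As a concrete illustration, the sketch below shows one minimal dead-reckoning update of this kind, assuming the accelerations have already been rotated into the world frame with gravity removed; the patent gives no formulas, so the function names, state layout, and small-angle integration are all assumptions.

```python
import numpy as np

def update_pose(pos, vel, attitude, gyro_rate, accel_world, dt):
    """One integration step from gyroscope and accelerometer samples.
    accel_world is assumed to be rotated into the world frame with gravity
    already removed; a real implementation would do that via the attitude
    and also correct for sensor drift."""
    attitude = attitude + gyro_rate * dt  # roll/pitch/yaw, small-angle step
    vel = vel + accel_world * dt          # integrate acceleration
    pos = pos + vel * dt                  # position relative to target point
    return pos, vel, attitude

def capture_direction(attitude):
    """Unit vector along which the camera points, derived from pitch/yaw."""
    _roll, pitch, yaw = attitude
    return np.array([np.cos(pitch) * np.cos(yaw),
                     np.cos(pitch) * np.sin(yaw),
                     np.sin(pitch)])
```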
In an embodiment of the present disclosure, the depth image information of the patrol image may be used to indicate the distance from each of the objects included in the patrol image to the camera of the area inspection apparatus, that is, the distance from the position of the object in the actual environment to the camera of the area inspection apparatus.
In step S102, the plurality of patrol images are stitched according to the image capture state information and the depth image information of each patrol image to obtain a panoramic image of the inspected area.
In an embodiment of the present disclosure, stitching the plurality of patrol images according to the image capture state information may include: obtaining the relative coordinates of each patrol image from the image capture state information; stitching the depth image information of the patrol images into a panoramic depth map according to those relative coordinates; and projecting each patrol image into the panoramic depth map according to the correspondence between the panoramic depth map and that patrol image, thereby obtaining the panoramic image.
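A rough sketch of this projection step follows: each pixel is back-projected with its depth and the image's relative pose, then mapped into a shared equirectangular panoramic grid, keeping the nearest surface when pixels collide. The intrinsics matrix K, the pose (R, t), and the equirectangular mapping are assumptions for illustration, not details taken from the patent.

```python
import numpy as np

def project_to_panorama(image, depth, K, R, t, pano, pano_depth):
    """image: HxWxC, depth: HxW, K: 3x3 intrinsics, (R, t): relative pose.
    pano: panoramic image buffer; pano_depth: HxW buffer initialized to inf."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3)
    rays = (np.linalg.inv(K) @ pix.T).T                  # camera-frame rays
    pts = (R @ (rays * depth.reshape(-1, 1)).T).T + t    # world coordinates
    r = np.linalg.norm(pts, axis=1)
    lon = np.arctan2(pts[:, 0], pts[:, 2])               # longitude
    lat = np.arcsin(pts[:, 1] / r)                       # latitude
    u = ((lon + np.pi) / (2 * np.pi) * pano.shape[1]).astype(int) % pano.shape[1]
    v = ((lat + np.pi / 2) / np.pi * pano.shape[0]).astype(int).clip(0, pano.shape[0] - 1)
    closer = r < pano_depth[v, u]                        # keep nearest surface
    pano[v[closer], u[closer]] = image.reshape(-1, image.shape[-1])[closer]
    pano_depth[v[closer], u[closer]] = r[closer]
    return pano, pano_depth
```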
Before each patrol image is projected into the panoramic depth map according to the correspondence between the panoramic depth map and each patrol image, it may be checked whether the panoramic depth map contains a hole area. A hole area may be understood as the location of a pixel that has no corresponding coordinate in the coordinate system of the panoramic depth map; it is generally caused by shaking or defocusing of the area inspection device when the image was captured.
When the panoramic depth map contains a hole area, the hole area can be filled to obtain a hole-free panoramic depth map, so that each patrol image can be projected into the hole-free panoramic depth map according to their correspondence to obtain the panoramic image. In one embodiment, the hole area may be filled in various ways known in the art; as an example, a Gaussian convolution method may be employed. In another embodiment, a plurality of patrol images of the inspected area, together with the corresponding image capture state information and depth image information, may be acquired again; the relative coordinates of the newly acquired patrol images are then obtained from the newly acquired image capture state information, and the hole area in the panoramic depth map is filled according to those relative coordinates and the depth image information of the newly acquired patrol images.
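The Gaussian option can be realized as a normalized convolution: blur both the valid depth values and a validity mask, then divide, repeating until the holes close. The sketch below is one such implementation under the assumption that holes are marked with non-finite values; the patent does not prescribe the exact procedure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fill_holes(pano_depth, sigma=3.0, max_iters=10):
    """Fill hole pixels (non-finite values) by Gaussian normalized convolution."""
    depth = pano_depth.copy()
    for _ in range(max_iters):
        valid = np.isfinite(depth)
        if valid.all():
            break
        filled = np.where(valid, depth, 0.0)
        num = gaussian_filter(filled, sigma)                # blurred values
        den = gaussian_filter(valid.astype(float), sigma)   # blurred mask
        estimate = np.where(den > 1e-6, num / den, np.inf)
        depth = np.where(valid, depth, estimate)            # fill holes only
    return depth
```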
In step S103, cargo indication information is acquired from the panoramic image.
The cargo indication information is used for indicating the type and the position of at least one cargo in the panoramic image.
In an embodiment of the present disclosure, the cargo indication information may be obtained by performing image recognition on the panoramic image.
In an embodiment of the present disclosure, the type of the goods indicated by the cargo indication information may be understood as one or more of an identifier, a name, a code, a shape and size, a package, a price, a production date, an expiration date, a manufacturer, a discount, and the like of the goods. The position of the goods indicated by the cargo indication information may be understood as the relative position of the goods in the inspected area, or which goods storage device (e.g., which shelf) in the inspected area the goods are placed on.
According to this technical solution, a plurality of patrol images of an inspected area, image capture state information corresponding to each patrol image, and depth image information of each patrol image are acquired, where the image capture state information indicates the spatial position and image capture direction of the area inspection device when the patrol image was captured. The patrol images are then stitched according to the image capture state information and the depth image information to obtain a panoramic image of the inspected area, and cargo indication information indicating the type and position of at least one item of goods is acquired from the panoramic image. The panoramic image reflects all the image information of the goods stored in the inspected area, yet its data amount is smaller than the total data amount of the individual patrol images. This solution can therefore automatically acquire cargo indication information indicating the type and position of the goods in the inspected area, reducing manual effort; at the same time, without reducing the accuracy of the acquired cargo indication information, it occupies fewer data-processing resources and improves the efficiency of inspecting the goods in the inspected area.
In one implementation of the present disclosure, the method further includes the following steps:
acquiring, from the panoramic image, at least one of goods display information, area information of the inspected area, shelf indication information, and storage device indication information.
The goods display information indicates the characters corresponding to at least one item of goods in the panoramic image; the area information of the inspected area indicates the floor area of the inspected area; the shelf indication information indicates the type and position of at least one shelf in the panoramic image; and the storage device indication information indicates the type and position of at least one storage device in the panoramic image.
In one embodiment of the present disclosure, the characters corresponding to the goods may be Chinese characters, letters, numbers, punctuation marks, and the like. The goods display information can be obtained by performing character recognition on those characters in the panoramic image whose distance from the goods is smaller than or equal to a character distance threshold. For example, the characters corresponding to a good may describe the good, describe a promotional scheme, or indicate the good's price.
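One way to realize this association is sketched below under assumed data shapes (recognized text with center coordinates, detected goods with center coordinates); the threshold value and function name are illustrative only.

```python
import math

def goods_display_info(ocr_results, goods_centers, char_distance_threshold=50.0):
    """Associate recognized characters with nearby goods.
    ocr_results: [(text, (cx, cy))]; goods_centers: {goods_id: (cx, cy)}."""
    display_info = {g: [] for g in goods_centers}
    for text, (tx, ty) in ocr_results:
        for goods_id, (gx, gy) in goods_centers.items():
            if math.dist((tx, ty), (gx, gy)) <= char_distance_threshold:
                display_info[goods_id].append(text)  # e.g. price tags, promos
    return display_info
```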
The type of a shelf indicated by the shelf indication information may be understood as one or more of an identification, a name, a code, a shape and size, the size of goods the shelf can hold, and the number of goods it can hold. The position of a shelf indicated by the shelf indication information may be understood as the relative position of the shelf in the inspected area, or which specific goods in the inspected area the shelf carries.
A storage device is understood to mean a device other than a shelf that is used to hold or store goods in the inspected area. The type of the storage device indicated by the storage device indication information may be understood as one or more of an identification, a name, a code, a shape and size, the size of goods it can hold, the number of goods it can hold, power, rated voltage, rated current, an internet protocol address, an on/off state, a cooling state, and a heating state of the storage device. The position of the storage device indicated by the storage device indication information may be understood as the relative position of the storage device in the inspected area, or which specific goods in the inspected area the storage device carries.
According to this technical solution, at least one of the goods display information, the area information of the inspected area, the shelf indication information, and the storage device indication information is acquired from the panoramic image, making it convenient for a merchant or business to obtain information helpful for managing the goods stored in the inspected area and improving the user experience.
In one implementation manner of the present disclosure, in step S103, the cargo indication information is obtained according to the panoramic image, which may be implemented by the following steps:
and acquiring a pre-trained cargo identification model, and inputting the panoramic image into the cargo identification model to acquire cargo indication information output by the cargo identification model.
In an embodiment of the present disclosure, the cargo identification model may be stored in the area inspection device in advance, or may be obtained from another device or system. The cargo identification model may be a neural network (NN) model, a convolutional neural network (CNN) model, a long short-term memory (LSTM) network model, or the like.
In this embodiment, the accuracy of the acquired cargo indication information can be improved by acquiring a pre-trained cargo identification model and inputting the panoramic image into the cargo identification model to acquire the cargo indication information output by the cargo identification model.
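For illustration, the sketch below stands in a generic torchvision detection model for the pre-trained cargo identification model, since the patent does not fix the architecture; the score threshold and output format are assumptions.

```python
import torch
import torchvision

# Assumed stand-in for the cargo identification model.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

@torch.no_grad()
def recognize_goods(panorama, score_threshold=0.5):
    """panorama: 3xHxW float tensor in [0, 1]. Returns cargo indication
    information as (type, position) pairs."""
    out = model([panorama])[0]
    keep = out["scores"] >= score_threshold
    return [{"type": int(label), "position": box.tolist()}
            for label, box in zip(out["labels"][keep], out["boxes"][keep])]
```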
In one implementation of the present disclosure, the method further comprises:
sending the cargo indication information and the plurality of patrol images of the inspected area;
receiving indication information verification information sent by an indication information verification server in response to the cargo indication information and the plurality of patrol images of the inspected area;
judging whether the indication information verification information indicates that the cargo indication information does not meet the indication information verification condition;
in response to the indication information verification information indicating that the cargo indication information does not meet the indication information verification condition, displaying supplementary entry prompt information, where the supplementary entry prompt information is used to prompt input of cargo supplementary entry information;
acquiring the cargo supplementary entry information, and correcting the cargo indication information according to the cargo supplementary entry information.
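A minimal device-side sketch of this loop follows. The HTTP transport, the endpoint address, and the message field names are placeholders assumed for illustration; the disclosure does not fix a particular protocol.

```python
# Hedged sketch of the device-side verification loop; endpoint URL, JSON
# field names, and the use of HTTP are illustrative assumptions only.
import requests

VERIFY_URL = "http://indication-check.example/verify"   # placeholder address

def verify_and_correct(indications, patrol_image_bytes, prompt_user):
    # Send the cargo indication information and the patrol images.
    reply = requests.post(VERIFY_URL, json={
        "indications": indications,
        "images": [img.hex() for img in patrol_image_bytes],
    }).json()
    # Judge whether the returned verification information reports a failure.
    if not reply.get("check_passed", True):
        # Display the supplementary entry prompt and collect the entry.
        supplement = prompt_user("Please input cargo supplementary entry information")
        # Correct the cargo indication information with the supplementary entry.
        indications = {**indications, **supplement}         # assumed dict-shaped data
    return indications
```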
In an embodiment of the present disclosure, the indication information verification server may display the received patrol images of the inspected area on a human-computer interaction device such as a display screen, so that a worker can manually inspect the inspected area from the displayed images and enter a manual inspection result through a device such as a keyboard. The server then checks the received cargo indication information against the type and position of the goods indicated by the manual inspection result to generate the indication information verification information. When the type and position of the goods indicated by the manual inspection result do not match those indicated by the cargo indication information, the generated verification information indicates that the cargo indication information does not satisfy the indication information verification condition.
In an embodiment of the present disclosure, the supplementary entry prompt information may be displayed through a human-computer interaction device (e.g., a display screen or a speaker) on the area inspection device, and the cargo supplementary entry information may be input through a human-computer interaction device (e.g., a keyboard, a touch pad, a touch screen, or a microphone) on the area inspection device.
For example, the area inspection device may display the supplementary entry prompt information on an information supplementary entry interface shown on the touch screen, and acquire the cargo supplementary entry information the user inputs through that interface. The information supplementary entry interface may be an interactive interface of an application (app) running on the area inspection device.
The information supplementary entry interface displays one or more visual cues on the touch screen that guide the user in performing the cargo supplementary entry input action. A visual cue provides a hint or reminder of the input action and may be text, graphics, or any combination thereof. The input action involves contact with the touch screen; in some embodiments, it is the performance of at least one predetermined gesture on the touch screen. As used herein, a gesture is a movement of an object or accessory in contact with the touch screen. For example, the predetermined gesture may include contacting the touch screen at a target location on the information supplementary entry interface (an initialization gesture) and breaking contact after maintaining continuous contact for longer than a preset contact time threshold (a completion gesture).
For ease of illustration, in describing how the user's cargo supplementary entry information is obtained, and in the other embodiments below, contact on the touch screen is described as being performed with one or more fingers of at least one hand. It should be appreciated that contact may be made with any suitable object or accessory, such as a stylus. Contact may include one or more taps on the touch screen, maintained continuous contact, movement of the contact point while contact is maintained, breaking of contact, or any combination thereof.
The area inspection device detects contact on the touch screen. If the contact does not correspond to an attempt to perform the cargo supplementary entry input action, or corresponds to a failed or aborted attempt, the device does not read the cargo supplementary entry information associated with that action, which is stored in the device in advance. For example, if the input action is to touch the target location on the interface and to break contact only after the contact duration exceeds the preset contact time threshold, then a series of random taps on the touch screen does not correspond to the input action.
If the contact corresponds to successful execution of the input action, the area inspection device reads the cargo supplementary entry information associated with the action, stored in the device in advance, to acquire the cargo supplementary entry information.
While acquiring the cargo supplementary entry information, the device may also display, through the touch screen, one or more visual cues corresponding to the input action, as described above.
In addition to visual cues, the device may provide non-visual cues, such as audio cues (e.g., sounds) or physical cues (e.g., vibration), to indicate the progress of the input action.
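To make the press-and-hold gesture concrete, the following is a minimal sketch of the detection logic; the event callbacks, the coordinate model, and the 1.5 s threshold value are illustrative assumptions.

```python
# Sketch of the press-and-hold gesture check described above; event names
# and the threshold value are assumptions.
import time

HOLD_THRESHOLD_S = 1.5    # assumed preset contact time threshold

class SupplementEntryGesture:
    """Touch the entry icon, hold past the threshold, then release."""

    def __init__(self, icon_bounds):
        self.icon_bounds = icon_bounds     # (x0, y0, x1, y1) of the entry icon
        self.touch_start = None

    def on_touch_down(self, x, y):
        x0, y0, x1, y1 = self.icon_bounds
        if x0 <= x <= x1 and y0 <= y <= y1:
            self.touch_start = time.monotonic()    # initialization gesture

    def on_touch_up(self):
        if self.touch_start is None:
            return False                    # e.g. random taps elsewhere: ignored
        held = time.monotonic() - self.touch_start
        self.touch_start = None
        return held >= HOLD_THRESHOLD_S     # completion gesture
```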
Fig. 3 and fig. 4 illustrate graphical user interface (GUI) diagrams of an area inspection device according to an embodiment of the present disclosure, at different points during execution of the cargo supplementary entry input gesture.
In fig. 3, the user's finger 21 has not yet touched the touch screen 22 of the area inspection device, so the device does not read the pre-stored cargo supplementary entry information associated with the input action. In fig. 4, the user begins the input action by touching the touch screen 22 with the finger 21. Specifically, the finger 21 touching the information supplementary entry icon 24 in the information supplementary entry interface 23 displayed on the touch screen 22 is interpreted as an attempt to input the cargo supplementary entry information. If the finger 21 then remains in continuous contact with the position of the icon 24 for longer than the preset contact time threshold, one input action is deemed complete. The device may read pre-stored cargo supplementary entry information corresponding to the number of completed input actions to acquire the cargo supplementary entry information, and may display prompt information associated with the input action on the touch screen 22.
In this embodiment, the cargo indication information and the plurality of patrol images of the inspected area are sent; the verification information returned by the indication information verification server is received; whether that information indicates that the cargo indication information fails the verification condition is judged; and if it does, the supplementary entry prompt information is displayed, the cargo supplementary entry information is acquired, and the cargo indication information is corrected accordingly. This ensures that the corrected cargo indication information is highly accurate.
In one implementation of the present disclosure, the method further comprises:
taking the panoramic image as input and the corrected cargo indication information as output, training the cargo identification model.
In this embodiment, training the cargo identification model with the panoramic image as input and the corrected cargo indication information as output allows the model to learn the regularities between the panoramic image of the inspected area and the indication information of goods that are difficult to identify in it, so that the trained model can acquire cargo indication information even for goods that were hard to identify in the original panoramic image.
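A minimal sketch of one such training step is shown below; the mean-squared-error loss is an assumed stand-in for whatever loss matches the model's actual output encoding.

```python
# Hedged sketch of one fine-tuning step: panoramic image in, corrected cargo
# indication information as the training target. The loss form is assumed.
import torch
import torch.nn.functional as F

def finetune_step(model, panorama_tensor, corrected_target, optimizer):
    model.train()
    prediction = model(panorama_tensor)
    loss = F.mse_loss(prediction, corrected_target)   # assumed loss form
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```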
In an implementation of the present disclosure, before training the cargo identification model with the panoramic image as input and the corrected cargo indication information as output, the method further includes:
receiving updated weight parameters sent by an edge server, and updating the cargo identification model on the area inspection device according to the updated weight parameters.
Training the cargo identification model with the panoramic image as input and the corrected cargo indication information as output then includes:
training the updated cargo identification model on the area inspection device with the panoramic image as input and the corrected cargo indication information as output;
when the trained cargo identification model on the area inspection device has not converged, obtaining a gradient update vector from the trained model and sending the gradient update vector to the edge server;
when the trained cargo identification model on the area inspection device has converged, storing it as the pre-trained cargo identification model.
The edge server aggregates the gradient update vectors and updates the weight parameters of the cargo identification model on the edge server according to the aggregated vectors to obtain the updated weight parameters. The edge server may be a cloud server or a server provided by an area inspection service provider. One edge server may correspond to one or more area inspection devices; for example, a service provider may divide its administered region into blocks, with the area inspection devices in each block corresponding to one edge server.
The cargo identification model on the edge server may likewise be a neural network model, a convolutional neural network model, a long short-term memory network model, or the like.
In the technical solution of this embodiment, the updated weight parameters received by the area inspection device are obtained by the edge server aggregating the gradient update vectors sent by multiple area inspection devices and updating the weight parameters of its own cargo identification model accordingly. The updated model on the device therefore reflects the common regularities between panoramic images and recognition results (i.e., corrected cargo indication information) that the model on the edge server learned in the previous training round. Training this updated model with the panoramic images acquired by the device as input and the corresponding corrected cargo indication information as output lets the model retain those common regularities while also learning, in a personalized way, the private regularities between the panoramic images this particular device acquires and their corrected cargo indication information. When the trained model on the device has not converged, it still needs training: the device obtains a gradient update vector from the trained model and sends it to the edge server, which can keep deriving updated weight parameters from the vectors uploaded by multiple devices, so that the models on the devices continue to be trained. When the trained model on the device converges, it can be considered able to acquire sufficiently accurate cargo indication information from the panoramic images the device captures, and it is stored as the pre-trained cargo identification model.
In this embodiment, on one hand, the final pre-trained cargo identification model learns both the common and the private regularities, so the cargo indication information it produces is more accurate; on the other hand, because the continued training of the models is shared between the area inspection devices and the edge server, fewer processing resources are needed on the server side than when a server trains the model alone, and training is faster.
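The device/edge-server exchange might be sketched as follows; plain averaging is an assumed aggregation rule, the transport is omitted, and the construction of the gradient update vector simply follows the description above.

```python
# Hedged sketch of the device/edge-server exchange; aggregation by plain
# averaging is an assumption, and the transport layer is omitted.
import torch

def local_training_round(model, batches, loss_fn, lr=1e-3):
    """Runs on the area inspection device; returns a gradient update vector."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for panorama, corrected in batches:
        optimizer.zero_grad()
        loss_fn(model(panorama), corrected).backward()
        optimizer.step()
    # Flatten the latest gradients into the vector sent to the edge server.
    return torch.cat([p.grad.flatten()
                      for p in model.parameters() if p.grad is not None])

def aggregate_updates(update_vectors):
    """Runs on the edge server: aggregate the vectors from several devices."""
    return torch.stack(update_vectors).mean(dim=0)    # assumed averaging rule
```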
In one implementation of the present disclosure, before acquiring the plurality of patrol images of the inspected area, the image acquisition state information corresponding to each patrol image, and the depth image information of each patrol image in step S101, the method further includes:
acquiring inspection task information, and acquiring position information of at least one inspected area according to the inspection task information;
acquiring position information of the area inspection device, and acquiring inspection navigation path information according to the position information of the at least one inspected area and the position information of the area inspection device;
and displaying the inspection navigation path information.
In an embodiment of the present disclosure, the inspection task information may be used to indicate the latitude and longitude of the at least one inspected area, or the inspection task information may be used to indicate the address information of the at least one inspected area.
The position information of the area inspection device may be real-time positioning information from a global navigation satellite system, received through a wireless communication module on the device; it may also be obtained from real-time positioning information sent by a terminal device paired with the area inspection device, for example a terminal device paired over Bluetooth or a wireless local area network.
The inspection navigation path information may be displayed through a human-computer interaction device on the area inspection device or connected to it, such as a display screen, a speaker, or a receiver, or through a terminal device paired with the area inspection device.
In this embodiment, by acquiring the inspection task information, obtaining the position information of the at least one inspected area from it, obtaining the position information of the area inspection device, deriving the inspection navigation path information from the two positions, and displaying that information, a user carrying the area inspection device can conveniently obtain the navigation path and travel to the at least one inspected area along the indicated route. This reduces the possibility of the user getting lost on the way to the inspected area and improves inspection efficiency.
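As an illustrative sketch of deriving the navigation path, the greedy nearest-neighbour ordering and the haversine distance below are assumptions; the disclosure does not fix a particular routing algorithm.

```python
# Sketch of building inspection navigation path information by repeatedly
# visiting the nearest remaining inspected area; the ordering rule is assumed.
import math

def haversine_m(p, q):
    """Great-circle distance in metres between (lat, lon) pairs in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371000 * 2 * math.asin(math.sqrt(a))

def navigation_path(device_pos, area_positions):
    """Greedy nearest-neighbour ordering of the inspected areas."""
    path, current, remaining = [], device_pos, list(area_positions)
    while remaining:
        nxt = min(remaining, key=lambda area: haversine_m(current, area))
        remaining.remove(nxt)
        path.append(nxt)
        current = nxt
    return path
```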
In one implementation of the present disclosure, acquiring the plurality of patrol images of the inspected area, the image acquisition state information corresponding to each patrol image, and the depth image information of each patrol image in step S101 may be implemented as follows:
when the position of the area inspection device matches the position of the at least one inspected area, acquiring the plurality of patrol images of the inspected area, the image acquisition state information corresponding to each patrol image, and the depth image information of each patrol image.
In an embodiment of the present disclosure, the position of the area inspection device matching the position of the at least one inspected area may be understood as the distance between the two positions being less than or equal to a distance difference threshold, where the threshold may be stored in the area inspection device in advance or obtained from another device or system.
In this embodiment, acquiring the plurality of patrol images of the inspected area, the image acquisition state information corresponding to each patrol image, and the depth image information of each patrol image only when the position of the area inspection device matches the position of the at least one inspected area ensures that all the acquired data correspond to the inspected area, improving the reliability of the acquired data.
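The match test itself might be sketched as follows; the 20 m value is an assumed pre-stored threshold, and distance_m may be any position-to-metres function, such as the haversine helper sketched in the previous embodiment.

```python
# Hedged sketch of the position-match test; the threshold value is assumed.
DISTANCE_THRESHOLD_M = 20.0   # assumed pre-stored distance difference threshold

def position_matches(device_pos, area_pos, distance_m,
                     threshold=DISTANCE_THRESHOLD_M):
    """distance_m: any (pos, pos) -> metres function."""
    return distance_m(device_pos, area_pos) <= threshold
```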
In one implementation of the present disclosure, the method further comprises:
acquiring audio information of the inspected area when the position of the area inspection device matches the position of the at least one inspected area;
and acquiring communication frequency information and language attitude information according to the audio information, and generating interaction state indication information according to the communication frequency information and the language attitude information.
In one embodiment of the present disclosure, the audio information of the inspected area may be acquired by an audio acquisition device on the area inspection device, such as a microphone, or by an audio acquisition device on a terminal device paired with the area inspection device, for example a terminal device paired over Bluetooth or a wireless local area network.
The communication frequency information may be understood as information indicating how frequently the users around the area inspection device engage in voice communication; for example, it may take five levels: very cold, cold, general, hot, and very hot. The language attitude information may be understood as information indicating the attitude of the users around the area inspection device during voice communication; for example, it may take five levels: very poor, poor, normal, good, and very good.
The communication frequency information and the language attitude information may be obtained by performing speech recognition on the audio information to obtain the corresponding dialogue information and deriving the two kinds of information from the dialogue; alternatively, a pre-trained frequency-attitude recognition model may be obtained, and the audio information input into it to obtain the communication frequency information and the language attitude information it outputs.
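A sketch of the first option, deriving the two kinds of information from recognised dialogue, is given below; the utterance-rate bands and the keyword-based attitude rule are invented for illustration only.

```python
# Hedged sketch: deriving communication frequency and language attitude
# information from recognised dialogue; bands and keywords are invented.
def analyse_dialogue(utterances, duration_minutes):
    """utterances: recognised sentences from the audio information."""
    if not utterances:
        return "very cold", "normal"
    rate = len(utterances) / max(duration_minutes, 1e-6)   # utterances per minute
    bands = [(1, "very cold"), (3, "cold"), (6, "general"), (10, "hot")]
    frequency = next((label for limit, label in bands if rate < limit), "very hot")
    polite = ("thanks", "please", "great")                 # assumed positive markers
    positive = sum(any(w in u.lower() for w in polite) for u in utterances)
    attitude = "good" if positive * 2 >= len(utterances) else "normal"
    return frequency, attitude
```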
In this embodiment, when the position of the area inspection device matches the position of the at least one inspected area, the audio information of the inspected area is acquired, the communication frequency information and the language attitude information are derived from it, and the interaction state indication information is generated from the two. This makes it easy for the user to understand how willing the people around the area inspection device are to communicate, helps the user improve his or her manner of voice communication in time, and improves communication efficiency.
In one implementation of the present disclosure, generating the interaction state indication information according to the communication frequency information and the language attitude information includes:
acquiring a pre-trained interaction state recognition model, and inputting the communication frequency information and the language attitude information into the interaction state recognition model to obtain the interaction state indication information output by the model.
In an embodiment of the present disclosure, the interaction state recognition model may be stored in the area inspection device in advance, or may be obtained from other devices or systems. The interaction state recognition model may be a neural network model, a convolutional neural network model, a long short-term memory network model, or the like.
In this embodiment, acquiring a pre-trained interaction state recognition model and inputting the communication frequency information and the language attitude information into it to obtain the interaction state indication information improves the accuracy of the obtained interaction state indication information.
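Illustratively, inference with such a model might look as follows; the five-level encodings and the classifier interface are assumptions.

```python
# Hedged sketch of interaction state inference; level encodings and the
# classifier interface are assumptions.
import torch

FREQUENCY_LEVELS = ["very cold", "cold", "general", "hot", "very hot"]
ATTITUDE_LEVELS = ["very poor", "poor", "normal", "good", "very good"]

def interaction_state(model, frequency, attitude):
    """Encode the two levels and run the interaction state recognition model."""
    features = torch.tensor([[FREQUENCY_LEVELS.index(frequency),
                              ATTITUDE_LEVELS.index(attitude)]],
                            dtype=torch.float32)
    with torch.no_grad():
        logits = model(features)          # assumed classifier over state classes
    return int(logits.argmax(dim=1))      # index of the indicated interaction state
```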
Fig. 5 illustrates an overall flowchart of an area inspection method according to an embodiment of the present disclosure. As shown in fig. 5, the area inspection method includes:
in step S201, inspection task information is acquired, and position information of at least one inspected area is acquired according to the inspection task information.
In step S202, position information of the area inspection apparatus is acquired, and inspection navigation path information is acquired based on the position information of the at least one inspected area and the position information of the area inspection apparatus.
In step S203, when the position of the area inspection device matches the position of the at least one inspected area, the plurality of patrol images of the inspected area, the image acquisition state information corresponding to each patrol image, and the depth image information of each patrol image are acquired.
In step S204, the plurality of patrol images are stitched according to the image acquisition state information and the depth image information of each patrol image to obtain a panoramic image of the inspected area.
In step S205, at least one of the goods display information, the inspected area information, the shelf indication information, and the storage device indication information is acquired according to the panoramic image.
In step S206, a pre-trained cargo identification model is obtained, and the panoramic image is input into the cargo identification model to obtain cargo indication information output by the cargo identification model.
In step S207, the cargo indication information and the plurality of patrol images of the inspected area are sent.
In step S208, the indication information verification information sent by the indication information verification server in response to the cargo indication information and the patrol images is received.
In step S209, it is judged whether the indication information verification information indicates that the cargo indication information does not satisfy the indication information verification condition.
In step S210, in response to the indication information verification information indicating that the cargo indication information does not satisfy the verification condition, the supplementary entry prompt information is displayed, where it is used to prompt input of the cargo supplementary entry information.
In step S211, the cargo supplementary entry information is acquired, and the cargo indication information is corrected according to it.
In step S212, with the panoramic image as input and the corrected cargo indication information as output, the cargo identification model is trained.
In step S213, when the position of the area inspection device matches the position of the at least one inspected area, the audio information of the inspected area is acquired.
In step S214, the communication frequency information and the language attitude information are obtained according to the audio information.
In step S215, the pre-trained interaction state recognition model is acquired, and the communication frequency information and the language attitude information are input into it to obtain the interaction state indication information output by the model.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods.
Fig. 6 shows a schematic block diagram of an area inspection apparatus according to an embodiment of the present disclosure, which may be implemented as part or all of an electronic device by software, hardware, or a combination of both. As shown in fig. 6, the area inspection apparatus includes:
a patrol information acquiring module 301 configured to acquire a plurality of patrol images of the inspected area, image acquisition state information corresponding to each patrol image, and depth image information of each patrol image, where the image acquisition state information is used to indicate the spatial position and image acquisition direction of the area inspection device when the patrol image is acquired;
a panoramic image acquiring module 302 configured to stitch the plurality of patrol images according to the image acquisition state information and the depth image information of each patrol image to obtain a panoramic image of the inspected area;
a cargo indication information acquiring module 303 configured to acquire cargo indication information according to the panoramic image, where the cargo indication information is used to indicate the type and position of at least one cargo in the panoramic image.
According to this technical solution, a plurality of patrol images of the inspected area are acquired, together with the image acquisition state information corresponding to each patrol image, which indicates the spatial position and image acquisition direction of the area inspection device when the image was captured, and the depth image information of each patrol image. The patrol images are then stitched according to the image acquisition state information and the depth image information to obtain a panoramic image of the inspected area, and cargo indication information indicating the type and position of at least one cargo in the panoramic image is acquired from the panorama. The panoramic image reflects all the image information of the goods stored in the inspected area while containing less data than the patrol images taken together. The solution therefore acquires cargo indication information automatically, reduces manpower consumption, occupies fewer data processing resources without reducing the accuracy of the acquired cargo indication information, and improves the efficiency of inspecting the goods in the inspected area.
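As a rough illustration of the stitching performed by module 302, the sketch below uses OpenCV's general-purpose stitcher; note that it aligns images from image content alone, whereas the disclosed method also uses the image acquisition state and depth information, so this is only an approximation.

```python
# Approximate sketch of panorama assembly with OpenCV's generic stitcher;
# the disclosed method additionally uses capture state and depth information.
import cv2

def stitch_panorama(patrol_images):
    """patrol_images: list of BGR numpy arrays ordered along the capture path."""
    stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
    status, panorama = stitcher.stitch(patrol_images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status code {status}")
    return panorama
```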
The present disclosure also discloses an electronic device. Fig. 7 shows a schematic structural block diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 7, the electronic device 400 includes a memory 401 and a processor 402, where the memory 401 is configured to store one or more computer instructions that are executed by the processor 402 to implement the method steps described above.
Fig. 8 is a schematic block diagram of a computer system suitable for implementing the area inspection method according to an embodiment of the present disclosure. As shown in fig. 8, the computer system 500 includes a processing unit 501, which can execute the various processes of the embodiments described above according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage section 508 into a random access memory (RAM) 503. The RAM 503 also stores the various programs and data necessary for the operation of the system 500. The processing unit 501, the ROM 502, and the RAM 503 are connected to one another by a bus 504; an input/output (I/O) interface 505 is also connected to the bus 504.
The following components are connected to the I/O interface 505: an input section 506 including a keyboard, a mouse, and the like; an output section 507 including a display such as a cathode ray tube (CRT) or liquid crystal display (LCD), and a speaker; a storage section 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card or a modem. The communication section 509 performs communication processing via a network such as the internet. A drive 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 510 as necessary, so that a computer program read from it can be installed into the storage section 508. The processing unit 501 may be implemented as a CPU, a graphics processor, a TPU, an FPGA, a neural network processor, or another processing unit.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present disclosure may be implemented by software or hardware. The units or modules described may also be provided in a processor, and the names of the units or modules do not in some cases constitute a limitation of the units or modules themselves.
As another aspect, the present disclosure also provides a computer-readable storage medium, which may be the computer-readable storage medium included in the apparatus in the above-described embodiment; or it may be a separate computer readable storage medium not incorporated into the device. The computer readable storage medium stores one or more programs for use by one or more processors in performing the methods described in the present disclosure.
In addition, the present disclosure also provides a computer program product having a computer program stored therein, which, when executed by a processor, causes the processor to at least implement the method as provided in the preceding embodiments.
The foregoing description is only exemplary of the preferred embodiments of the present disclosure and is illustrative of the principles of the technology employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with features having similar functions disclosed (but not limited to) in this disclosure.

Claims (12)

1. A method of regional inspection, the method comprising:
acquiring a plurality of patrol images of an inspected area, image acquisition state information corresponding to each patrol image, and depth image information of each patrol image, wherein the image acquisition state information is used for indicating the spatial position and the image acquisition direction of an area inspection device when the patrol images are acquired;
splicing the plurality of patrol images according to the image acquisition state information and the depth image information of each patrol image to obtain a panoramic image of the inspected area;
and acquiring cargo indication information according to the panoramic image, wherein the cargo indication information is used for indicating the type and the position of at least one cargo in the panoramic image.
2. The area inspection method according to claim 1, wherein said acquiring of the cargo indication information from the panoramic image includes:
and acquiring a pre-trained cargo identification model, and inputting the panoramic image into the cargo identification model to acquire the cargo indication information output by the cargo identification model.
3. The area inspection method according to claim 2, further comprising:
sending the cargo indication information and the plurality of patrol images of the inspected area;
receiving indication information verification information sent by an indication information verification server in response to the cargo indication information and the plurality of patrol images of the inspected area;
judging whether the indication information verification information indicates that the cargo indication information does not meet the indication information verification condition;
in response to the indication information verification information indicating that the cargo indication information does not meet the indication information verification condition, displaying supplementary entry prompt information, wherein the supplementary entry prompt information is used for prompting input of cargo supplementary entry information;
acquiring the cargo supplementary entry information, and correcting the cargo indication information according to the cargo supplementary entry information.
4. The area inspection method according to claim 3, further comprising:
and taking the panoramic image as input, taking the corrected cargo indication information as output, and training the cargo identification model.
5. The area inspection method according to any one of claims 1 to 4, wherein before acquiring the plurality of patrol images of the inspected area, the image acquisition state information corresponding to each patrol image, and the depth image information of each patrol image, the method further comprises:
acquiring inspection task information, and acquiring position information of at least one inspected area according to the inspection task information;
acquiring position information of the area inspection device, and acquiring inspection navigation path information according to the position information of the at least one inspected area and the position information of the area inspection device;
and displaying the inspection navigation path information.
6. The area inspection method according to claim 5, wherein the acquiring of the plurality of patrol images of the inspected area, the image acquisition state information corresponding to each patrol image, and the depth image information of each patrol image comprises:
when the position of the area inspection device matches the position of the at least one inspected area, acquiring the plurality of patrol images of the inspected area, the image acquisition state information corresponding to each patrol image, and the depth image information of each patrol image.
7. The area inspection method according to claim 5, further comprising:
acquiring audio information of the inspected area when the position of the area inspection device matches the position of the at least one inspected area;
and acquiring communication frequency information and language attitude information according to the audio information, and generating interaction state indication information according to the communication frequency information and the language attitude information.
8. The area inspection method according to claim 7, wherein generating the interaction state indication information according to the communication frequency information and the language attitude information comprises:
and acquiring a pre-trained interaction state recognition model, and inputting the communication frequency information and the language attitude information into the interaction state recognition model to acquire the interaction state indication information output by the interaction state recognition model.
9. The area inspection method according to any one of claims 1 to 4, further comprising:
acquiring at least one of goods display information, inspected area information, shelf indication information, and storage device indication information according to the panoramic image, wherein the goods display information is used for indicating characters corresponding to at least one cargo in the panoramic image, the inspected area information is used for indicating the area of the inspected area, the shelf indication information is used for indicating the type and the position of at least one shelf in the panoramic image, and the storage device indication information is used for indicating the type and the position of at least one storage device in the panoramic image.
10. An electronic device comprising a memory, a processor, and a computer program stored on the memory, wherein the processor executes the computer program to implement the method of any of claims 1-9.
11. A computer-readable storage medium having computer instructions stored thereon, wherein the computer instructions, when executed by a processor, implement the method of any of claims 1-9.
12. A computer program product comprising computer instructions, characterized in that the computer instructions, when executed by a processor, implement the method of any of claims 1-9.
CN202111198422.0A 2021-10-14 2021-10-14 Area inspection method, medium, and product Pending CN115983758A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111198422.0A CN115983758A (en) 2021-10-14 2021-10-14 Area inspection method, medium, and product

Publications (1)

Publication Number Publication Date
CN115983758A true CN115983758A (en) 2023-04-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination