CN114202714A - Tray motion state detection method and device, electronic equipment and readable medium - Google Patents

Tray motion state detection method and device, electronic equipment and readable medium

Info

Publication number
CN114202714A
Authority
CN
China
Prior art keywords
tray
image
image sequence
sequence
tray motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011189804.2A
Other languages
Chinese (zh)
Inventor
刘柳
黄龚
韩志林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Shifang Technology Co ltd
Original Assignee
Hangzhou Shifang Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Shifang Technology Co ltd filed Critical Hangzhou Shifang Technology Co ltd
Priority to CN202011189804.2A priority Critical patent/CN114202714A/en
Publication of CN114202714A publication Critical patent/CN114202714A/en
Pending legal-status Critical Current

Landscapes

  • Closed-Circuit Television Systems (AREA)

Abstract

The embodiments of the present disclosure disclose a tray motion state detection method and apparatus, an electronic device, and a readable medium. One embodiment of the method comprises: acquiring a tray motion video; generating a first image sequence based on the tray motion video; generating a second image sequence based on the first image sequence; generating tray detection parameters based on the second image sequence; and determining tray motion state information in response to determining that the tray detection parameters satisfy a first predetermined condition. This embodiment reduces the reliance on multiple hardware devices and lowers the application cost, so that the tray motion detection method can be applied in various fields.

Description

Tray motion state detection method and device, electronic equipment and readable medium
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to a tray motion state detection method and device, electronic equipment and a readable medium.
Background
The detection of the tray motion state is a technology for determining the motion state of a tray by detecting and analyzing tray motion data. Current tray motion state detection technology usually relies on multiple hardware devices (such as infrared devices, ultrasonic devices, laser radar devices, and the like) to extract tray motion data, and then analyzes those data to obtain the tray motion state.
However, when the tray motion state is detected in the above manner, the following technical problems often occur:
firstly, extracting the tray motion data requires multiple hardware devices, and their use drives up the cost of tray motion detection, making the method difficult to apply widely;
secondly, analyzing the tray motion data in the above manner involves a large number of data processing methods (e.g., edge detection, background subtraction, etc.), so analyzing the tray motion state takes a long time and the efficiency of tray motion state detection is low.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a tray motion state detection method, apparatus, electronic device and readable medium to solve one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a tray motion state detection method, including: acquiring a tray motion video; generating a first image sequence based on the tray motion video; generating a second image sequence based on the first image sequence; generating tray detection parameters based on the second image sequence; and determining the tray motion state information in response to determining that the tray detection parameters meet the first predetermined condition.
In a second aspect, some embodiments of the present disclosure provide a tray motion state detection apparatus, including: an acquisition unit configured to acquire a tray motion video; a first generation unit configured to generate a first image sequence based on the tray motion video; a second generation unit configured to generate a second image sequence based on the first image sequence; a third generation unit configured to generate a tray detection parameter based on the second image sequence; and a determination unit configured to determine tray motion state information in response to determining that the tray detection parameter satisfies a first predetermined condition.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon; when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method described in any of the implementations of the first aspect above.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method described in any of the implementations of the first aspect.
The above embodiments of the present disclosure have the following advantages: the tray motion detection method of some embodiments of the present disclosure can reduce the use of excess hardware devices and thereby the cost of applying the method. Specifically, the cost of related tray motion detection methods is high because they use multiple hardware devices when acquiring tray motion data. Based on this, the tray motion detection method of some embodiments of the present disclosure first acquires a tray motion video. The acquired tray motion video thus serves as the data for detecting the tray motion state: only a camera is required, no further hardware devices are needed, and cost is reduced. Then, a first image sequence is generated based on the tray motion video, so that the tray motion state can be detected through the generated first image sequence. Next, a second image sequence is generated based on the first image sequence: the first image sequence is screened so that the selected second image sequence provides more accurate data for tray motion detection. Then, tray detection parameters are generated based on the second image sequence, from which the motion state of the tray is further judged. Finally, in response to determining that the tray detection parameters satisfy the first predetermined condition, the tray motion state information is determined. Because this embodiment requires only a camera and no further hardware devices, the cost of hardware can be reduced, and the tray motion state detection method can therefore be widely applied in various fields (such as dish settlement in the catering industry and item settlement in the retail industry).
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a schematic view of an application scenario of a tray motion state detection method according to some embodiments of the present disclosure;
FIG. 2 is a flow diagram of some embodiments of a tray motion state detection method according to the present disclosure;
FIG. 3 is a schematic structural view of some embodiments of a tray motion state detection apparatus according to the present disclosure;
FIG. 4 is a schematic block diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a" and "an" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art will understand that they should be read as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a schematic diagram of an application scenario of a tray motion state detection method according to some embodiments of the present disclosure.
In the application scenario of fig. 1, first, the computing device 101 may obtain a tray motion video 102. The computing device 101 may then generate a first image sequence 103 based on the tray motion video 102. Thereafter, the computing device 101 may generate a second image sequence 104 based on the first image sequence 103. The computing device 101 may then generate tray detection parameters 105 based on the second image sequence 104. Finally, the computing device 101 may determine the tray motion state information 107 in response to determining that the tray detection parameters 105 satisfy the first predetermined condition 106. Optionally, the computing device 101 may send the tray motion state information 107 to a display terminal 108, so that the display terminal 108 displays the tray motion state information 107.
The computing device 101 may be hardware or software. When implemented as hardware, it may be a distributed cluster composed of multiple servers or terminal devices, or a single server or a single terminal device. When implemented as software, it may be installed in the hardware devices listed above, for example as multiple pieces of software or software modules for providing distributed services, or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the number of computing devices in FIG. 1 is merely illustrative. There may be any number of computing devices, as implementation needs dictate.
With continued reference to fig. 2, a flow 200 of some embodiments of a tray motion state detection method according to the present disclosure is shown. The tray motion state detection method comprises the following steps:
step 201, a tray motion video is obtained.
In some embodiments, the execution subject of the tray motion state detection method (e.g., the computing device 101 shown in fig. 1) may obtain the tray motion video through a wired or wireless connection.
As an example, the tray motion video may be a clip cut from a video stream captured by a camera mounted above the item settlement table. The captured video stream may be a video of the settlement area on the settlement table. The tray motion video may span from when the user starts moving the tray toward the settlement area to when the user finishes moving the tray.
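A minimal sketch of this acquisition step, assuming OpenCV and a single camera above the settlement table, is shown below; the device index, clip length, and start/stop triggering are illustrative assumptions not specified by the disclosure:

```python
import cv2

def acquire_tray_motion_video(camera_index: int = 0, max_frames: int = 150) -> list:
    """Capture a short clip from the camera above the settlement table.

    camera_index and max_frames are illustrative assumptions; the disclosure
    only requires a clip spanning the user's tray-moving process.
    """
    cap = cv2.VideoCapture(camera_index)
    frames = []
    while len(frames) < max_frames:
        ok, frame = cap.read()
        if not ok:  # camera disconnected or stream ended
            break
        frames.append(frame)
    cap.release()
    return frames
```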
Step 202, generating a first image sequence based on the tray motion video.
In some embodiments, the execution subject may generate the first image sequence based on the tray motion video in various ways.
In some optional implementations of some embodiments, the execution subject may generate the first image sequence based on the tray motion video through the following steps:
Firstly, frames are extracted from the tray motion video to generate a tray image sequence.
As an example, frames may be extracted from the tray motion video at 30 frames/second to generate the tray image sequence. The number of images in the tray image sequence may be 50.
Secondly, each tray image in the tray image sequence is scaled to generate a scaled tray image sequence.
As an example, the scaling ratio may be 1:0.4. Each tray image in the tray image sequence may be reduced according to this ratio to generate the scaled tray image sequence. The number of scaled tray images in the scaled tray image sequence may be 50.
Thirdly, graying processing is performed on each scaled tray image in the scaled tray image sequence to generate the first image sequence.
As an example, the number of first images in the first image sequence may be 50.
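Under the same assumptions (OpenCV, BGR frames already extracted at 30 frames/second), the scaling and graying of step 202 might be sketched as follows:

```python
import cv2

def generate_first_image_sequence(frames: list, scale: float = 0.4) -> list:
    """Scale each extracted frame by the example 1:0.4 ratio, then gray it.

    `frames` is assumed to be the 50-image tray image sequence already
    extracted from the tray motion video.
    """
    first_images = []
    for frame in frames:
        scaled = cv2.resize(frame, None, fx=scale, fy=scale)  # 1:0.4 scaling
        gray = cv2.cvtColor(scaled, cv2.COLOR_BGR2GRAY)       # graying processing
        first_images.append(gray)
    return first_images
```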
Step 203, generating a second image sequence based on the first image sequence.
In some embodiments, the execution subject may generate the second image sequence based on the first image sequence in various ways.
In some optional implementations of some embodiments, the execution subject may generate the second image sequence based on the first image sequence through the following steps:
Firstly, a target image is acquired.
As an example, the target image may be an image of the settlement table area. More specifically, the target image may be an image of a fixed region selected from the image of the settlement table area; for example, the block spanning the top quarter of the height and the rightmost quarter of the width of that image, i.e., the upper-right 1/16 of the image area.
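One hedged reading of this fixed region, a block covering 1/4 of the height and 1/4 of the width (hence 1/16 of the area) at the upper right, can be sketched as a simple NumPy crop; the exact region is a design choice the disclosure leaves open:

```python
import numpy as np

def crop_upper_right_sixteenth(image: np.ndarray) -> np.ndarray:
    """Crop the upper-right block spanning 1/4 of the height and 1/4 of the
    width, i.e. 1/16 of the image area. The region is an assumed reading of
    the example; any fixed region would serve as the target image."""
    h, w = image.shape[:2]
    return image[: h // 4, w - w // 4 :]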
Secondly, a tray motion parameter sequence is generated based on the first image sequence and the target image. The closer a tray motion parameter is to 0, the lower the matching degree between the two corresponding images, and the larger the area of the settlement table's target region covered by the tray.
As an example, the tray motion parameter may be: 0.1234.
Thirdly, first images whose corresponding tray motion parameters satisfy a second predetermined condition are selected from the first image sequence as second images to obtain the second image sequence.
As an example, in practical applications the related operations can still proceed normally even if the tray is slightly offset and does not completely cover the settlement table's target region. The second predetermined condition may therefore be: the tray motion parameter is less than 0.1. This effectively avoids the situation in which the tray covers the settlement table yet the detected tray motion parameter fails the second predetermined condition, thereby improving the accuracy of tray motion detection. The number of first images in the first image sequence may be 50. The first image sequence may then be screened in order; when the tray motion parameter corresponding to the 22nd first image satisfies the second predetermined condition, that first image and all subsequent, not-yet-screened first images may be taken as second images to obtain the second image sequence. The number of second images in the second image sequence is then 29.
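A minimal sketch of this screening, assuming the tray motion parameters have already been computed for all 50 first images and using the example threshold of 0.1:

```python
def select_second_images(first_images: list, motion_params: list,
                         threshold: float = 0.1) -> list:
    """Return the first image whose parameter first satisfies the second
    predetermined condition (parameter < threshold) together with all later
    images; with the example data this yields images 22-50, i.e. 29 images."""
    for idx, param in enumerate(motion_params):
        if param < threshold:
            return first_images[idx:]
    return []  # assumption: no image satisfied the condition
```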
In some optional implementations of some embodiments, the execution subject may generate the tray motion parameter sequence based on the first image sequence and the target image through the following steps:
Firstly, each first image in the first image sequence is cropped to generate a target area image, yielding a target area image sequence.
As an example, the upper right corner of each first image may be cropped out as the target area image. The target area image and the target image may have the same size.
Secondly, the number of pixel points of the target image and the pixel values corresponding to those pixel points are determined.
As an example, the number of pixel points of the target image may be 9. The pixel points of the target image and their corresponding pixel values may be: {[(1,1),200], [(1,2),0], [(1,3),200], [(2,1),0], [(2,2),200], [(2,3),0], [(3,1),200], [(3,2),0], [(3,3),200]}.
Thirdly, the number of pixel points in each target area image in the target area image sequence and the pixel values corresponding to those pixel points are determined.
As an example, the number of pixel points in each target area image in the target area image sequence may be 9. The pixel points of each target area image and their corresponding pixel values may be: {[(1,1),200], [(1,2),200], [(1,3),200], [(2,1),200], [(2,2),200], [(2,3),0], [(3,1),200], [(3,2),200], [(3,3),200]}; {[(1,1),200], [(1,2),0], [(1,3),200], [(2,1),200], [(2,2),200], [(2,3),0], [(3,1),200], [(3,2),0], [(3,3),200]}.
Fourthly, based on the target image and each target area image in the target area image sequence, the tray motion parameters in the tray motion parameter sequence are generated by the following formula:
[The formula is published only as an image in the original document (Figure BDA0002752462530000071); its variables are defined below.]
wherein C represents the tray motion parameter; ω1 represents a first preset weight; ω2 represents a second preset weight; i and j represent serial numbers; N represents the number of pixel points in the target image; E represents a pixel value corresponding to a pixel point in the target image, and Ei represents the pixel value corresponding to the i-th pixel point in the target image; M represents the number of pixel points in the target area image; F represents a pixel value corresponding to a pixel point in the target area image, and Fj represents the pixel value corresponding to the j-th pixel point in the target area image.
As an example, the first preset weight may be 0.1 and the second preset weight may be 0.9. The value range of the first preset weight may be [0, 0.1]; the second preset weight may be 1 minus the first preset weight, giving a value range of [0.9, 1]. The pixel points of the target image and their corresponding pixel values may be: {[(1,1),200], [(1,2),0], [(1,3),200], [(2,1),0], [(2,2),200], [(2,3),0], [(3,1),200], [(3,2),0], [(3,3),200]}, and the number of pixel points may be 9. The pixel points of each target area image in the target area image sequence and their corresponding pixel values may be: {[(1,1),200], [(1,2),200], [(1,3),200], [(2,1),200], [(2,2),200], [(2,3),0], [(3,1),200], [(3,2),200], [(3,3),200]}; {[(1,1),200], [(1,2),0], [(1,3),200], [(2,1),200], [(2,2),200], [(2,3),0], [(3,1),200], [(3,2),0], [(3,3),200]}, and the number of pixel points in each target area image may be 9. The tray motion parameter sequence may then be: [0.1502, 0.1127] (results rounded to 4 decimal places).
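Because the formula itself appears only as an image in the source, the sketch below is an illustrative stand-in built from the variable definitions: a weighted combination, with weights ω1 and ω2, of a global comparison over the target image's pixel values Ei and a pixel-wise comparison against the target area image's values Fj. It is not the patented formula and is not expected to reproduce the example values 0.1502 and 0.1127:

```python
import numpy as np

def tray_motion_parameter(target: np.ndarray, region: np.ndarray,
                          w1: float = 0.1, w2: float = 0.9) -> float:
    """Illustrative stand-in for the patent's formula, which is published
    only as an image. Combines a global term over the N target pixels Ei
    with a pixel-wise term over the M region pixels Fj, weighted by w1 and
    w2 (assumed here to satisfy w1 + w2 = 1)."""
    e = np.asarray(target, dtype=np.float64).ravel() / 255.0  # normalized Ei
    f = np.asarray(region, dtype=np.float64).ravel() / 255.0  # normalized Fj
    global_term = abs(e.mean() - f.mean())   # one way to compare the two images
    pixel_term = np.abs(e - f).mean()        # assumes N == M, as in the example
    return round(float(w1 * global_term + w2 * pixel_term), 4)
```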
The above formula is an inventive point of the embodiments of the present disclosure and addresses the second technical problem mentioned in the background: analyzing the tray motion data involves a large number of data processing methods (e.g., edge detection, background subtraction, etc.), so analyzing the tray motion state takes a long time and the efficiency of tray motion state detection is low. The factor behind this low efficiency is that conventional analysis involves many data processing methods, which consume considerable time. Removing this factor shortens the analysis time and thus improves detection efficiency. To this end, the formula avoids complex algorithms: only the image pixel values need to be processed to generate the tray motion parameters, reducing the time needed to detect the tray motion state. In addition, the formula combines two ways of calculating the tray motion parameter and introduces two preset weights to balance them, so the influence of the image pixel values on the final detection result is fully considered and the detection accuracy is improved. Finally, because the formula relies only on pixel values, the number of parameters required for tray motion state detection is kept as small as possible, further reducing detection time. The overall effect is improved efficiency of tray motion state detection.
And step 204, generating tray detection parameters based on the second image sequence.
In some embodiments, the execution subject may generate the tray detection parameter in various ways based on the second image sequence.
In some optional implementations of some embodiments, the execution subject may generate the tray detection parameters based on the second image sequence through the following steps:
Firstly, the number of second images in the second image sequence is determined.
As an example, the number of second images in the second image sequence may be 29.
Secondly, the number of second images is determined as the tray detection parameter.
As an example, the tray detection parameter may be 29.
step 205, in response to determining that the tray detection parameter satisfies the first predetermined condition, tray motion state information is determined.
In some embodiments, the execution subject may determine the tray motion state information in response to determining that the tray detection parameter satisfies a first predetermined condition.
As an example, the first predetermined condition may be: the tray detection parameter is greater than 25. A tray detection parameter of 29 satisfies this first predetermined condition, so the motion state information of the tray at the current time may be determined as: the tray stops moving.
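Putting steps 204 and 205 together, a hedged sketch follows; the threshold 25 comes from the example, while the "still moving" branch for the unsatisfied case is an assumption not spelled out in the disclosure:

```python
def determine_tray_motion_state(second_images: list, min_count: int = 25) -> str:
    """The tray detection parameter is the number of second images; if it
    exceeds the example threshold of 25 (the first predetermined condition),
    the tray is judged to have stopped moving."""
    tray_detection_parameter = len(second_images)
    if tray_detection_parameter > min_count:
        return "tray stopped moving"
    return "tray still moving"  # assumption: the unsatisfied case is unspecified
```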
In some optional implementations of some embodiments, the execution subject may send the tray motion state information to a display terminal so that the display terminal displays the tray motion state information.
As an example, the tray motion state information may be: the tray stops moving. The upper right corner of the display terminal may then display: the tray stops moving. The user is thereby informed of the tray motion state, and the execution subject may perform related operations, such as detection, on the items on the tray.
The above embodiments of the present disclosure have the following advantages: the tray motion detection method of some embodiments of the present disclosure can reduce the use of excess hardware devices and thereby the cost of applying the method. Specifically, the cost of related tray motion detection methods is high because they use multiple hardware devices when acquiring tray motion data. Based on this, the tray motion detection method of some embodiments of the present disclosure first acquires a tray motion video. The acquired tray motion video thus serves as the data for detecting the tray motion state: only a camera is required, no further hardware devices are needed, and cost is reduced. Then, a first image sequence is generated based on the tray motion video, so that the tray motion state can be detected through the generated first image sequence. Next, a second image sequence is generated based on the first image sequence: the first image sequence is screened so that the selected second image sequence provides more accurate data for tray motion detection. Then, tray detection parameters are generated based on the second image sequence, from which the motion state of the tray is further judged. Finally, in response to determining that the tray detection parameters satisfy the first predetermined condition, the tray motion state information is determined. Because this embodiment requires only a camera and no further hardware devices, the cost of hardware can be reduced, and the tray motion state detection method can therefore be widely applied in various fields (such as dish settlement in the catering industry and item settlement in the retail industry).
With further reference to fig. 3, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of a tray motion state detection apparatus. These apparatus embodiments correspond to the method embodiments described above with reference to fig. 2, and the apparatus may be applied in various electronic devices.
As shown in fig. 3, the tray motion state detection apparatus 300 of some embodiments includes: an acquisition unit 301, a first generation unit 302, a second generation unit 303, a third generation unit 304, and a determination unit 305. Wherein, the acquiring unit 301 is configured to acquire the tray motion video. A first generating unit 302 configured to generate a first image sequence based on the tray motion video. A second generating unit 303 configured to generate a second image sequence based on the first image sequence. A third generating unit 304 configured to generate tray detection parameters based on the second image sequence. A determining unit 305 configured to determine tray motion state information in response to determining that the above tray detection parameter satisfies a first predetermined condition.
It will be understood that the units described in the apparatus 300 correspond to the various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 300 and the units included therein, and are not described herein again.
Referring now to FIG. 4, a block diagram of an electronic device (e.g., the computing device 101 of fig. 1) 400 suitable for implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 4 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 4, the electronic device 400 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 401 that may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage device 408 into a random access memory (RAM) 403. The RAM 403 also stores various programs and data necessary for the operation of the electronic device 400. The processing device 401, the ROM 402, and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 407 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage devices 408 including, for example, magnetic tape, hard disk, etc.; and a communication device 409. The communication device 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. While fig. 4 illustrates an electronic device 400 having various devices, it should be understood that not all illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided. Each block shown in fig. 4 may represent one device or multiple devices as needed.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through the communication device 409, or from the storage device 408, or from the ROM 402. The computer program, when executed by the processing apparatus 401, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described above in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication (e.g., a communication network) in any form or medium. Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The computer readable medium may be embodied in the apparatus; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a tray motion video; generating a first image sequence based on the tray motion video; generating a second image sequence based on the first image sequence; generating tray detection parameters based on the second image sequence; and determining the tray motion state information in response to determining that the tray detection parameters meet the first predetermined condition.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software, and may also be implemented by hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit, a first generation unit, a second generation unit, a third generation unit, and a determination unit. The names of these units do not in some cases constitute a limitation on the unit itself, and for example, the acquisition unit may also be described as a "unit that acquires a tray motion video".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. Those skilled in the art will appreciate that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (9)

1. A tray motion state detection method, comprising:
acquiring a tray motion video;
generating a first image sequence based on the tray motion video;
generating a second image sequence based on the first image sequence;
generating tray detection parameters based on the second image sequence;
determining tray motion state information in response to determining that the tray detection parameter satisfies a first predetermined condition.
2. The method of claim 1, wherein the method further comprises:
and sending the tray motion state information to a display terminal so that the display terminal can display the tray motion state information.
3. The method of claim 2, wherein the generating a first image sequence based on the tray motion video comprises:
extracting frames from the tray motion video to generate a tray image sequence;
scaling each tray image in the tray image sequence to generate a scaled tray image sequence;
and performing graying processing on each scaled tray image in the scaled tray image sequence to generate a first image sequence.
4. The method of claim 3, wherein the generating a second image sequence based on the first image sequence comprises:
acquiring a target image;
generating a tray motion parameter sequence based on the first image sequence and the target image;
and selecting, from the first image sequence, first images whose corresponding tray motion parameters satisfy a second predetermined condition as second images to obtain a second image sequence.
5. The method of claim 4, wherein the generating tray detection parameters based on the second image sequence comprises:
determining the number of second images in the second image sequence;
and determining the number of the second images as the tray detection parameter.
6. The method of claim 5, wherein the generating a tray motion parameter sequence based on the first image sequence and the target image comprises:
cropping each first image in the first image sequence to generate a target area image, obtaining a target area image sequence;
determining the number of pixel points of the target image and pixel values corresponding to the pixel points;
determining the number of pixel points in each target area image in the target area image sequence and pixel values corresponding to the pixel points;
generating a tray motion parameter in the tray motion parameter sequence based on the target image and each target area image in the target area image sequence by the following formula:
[The formula is published only as an image in the original document (Figure FDA0002752462520000021); its variables are defined below.]
wherein C represents the tray motion parameter; ω1 represents a first preset weight; ω2 represents a second preset weight; i represents a serial number; j represents a serial number; N represents the number of pixel points in the target image; E represents a pixel value corresponding to a pixel point in the target image; Ei represents the pixel value corresponding to the i-th pixel point in the target image; M represents the number of pixel points in the target area image; F represents a pixel value corresponding to a pixel point in the target area image; and Fj represents the pixel value corresponding to the j-th pixel point in the target area image.
7. A tray motion state detection device comprising:
an acquisition unit configured to acquire a tray motion video;
a first generation unit configured to generate a first image sequence based on the tray motion video;
a second generation unit configured to generate a second image sequence based on the first image sequence;
a third generating unit configured to generate a tray detection parameter based on the second image sequence;
a determination unit configured to determine tray motion state information in response to determining that the tray detection parameter satisfies a first predetermined condition.
8. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-6.
9. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-6.
CN202011189804.2A 2020-10-30 2020-10-30 Tray motion state detection method and device, electronic equipment and readable medium Pending CN114202714A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011189804.2A CN114202714A (en) 2020-10-30 2020-10-30 Tray motion state detection method and device, electronic equipment and readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011189804.2A CN114202714A (en) 2020-10-30 2020-10-30 Tray motion state detection method and device, electronic equipment and readable medium

Publications (1)

Publication Number Publication Date
CN114202714A (en) 2022-03-18

Family

ID=80645414

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011189804.2A Pending CN114202714A (en) 2020-10-30 2020-10-30 Tray motion state detection method and device, electronic equipment and readable medium

Country Status (1)

Country Link
CN (1) CN114202714A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016186803A (en) * 2016-06-21 2016-10-27 ローレルバンクマシン株式会社 Sales price payment system
CN108833801A (en) * 2018-07-11 2018-11-16 深圳合纵视界技术有限公司 Adaptive motion detection method based on image sequence
CN110069995A (en) * 2019-03-16 2019-07-30 浙江师范大学 A kind of service plate moving state identification method based on deep learning

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016186803A (en) * 2016-06-21 2016-10-27 ローレルバンクマシン株式会社 Sales price payment system
CN108833801A (en) * 2018-07-11 2018-11-16 深圳合纵视界技术有限公司 Adaptive motion detection method based on image sequence
CN110069995A (en) * 2019-03-16 2019-07-30 浙江师范大学 A kind of service plate moving state identification method based on deep learning

Similar Documents

Publication Publication Date Title
CN109255337B (en) Face key point detection method and device
CN113607185B (en) Lane line information display method, lane line information display device, electronic device, and computer-readable medium
CN110516678B (en) Image processing method and device
CN111784712B (en) Image processing method, device, equipment and computer readable medium
CN112488783B (en) Image acquisition method and device and electronic equipment
CN110288625B (en) Method and apparatus for processing image
CN111459364B (en) Icon updating method and device and electronic equipment
CN115600629B (en) Vehicle information two-dimensional code generation method, electronic device and computer readable medium
CN111461968A (en) Picture processing method and device, electronic equipment and computer readable medium
CN112464039B (en) Tree-structured data display method and device, electronic equipment and medium
CN112818898B (en) Model training method and device and electronic equipment
CN111461965B (en) Picture processing method and device, electronic equipment and computer readable medium
CN112445394B (en) Screenshot method and screenshot device
CN114202714A (en) Tray motion state detection method and device, electronic equipment and readable medium
CN111461969B (en) Method, device, electronic equipment and computer readable medium for processing picture
CN114419298A (en) Virtual object generation method, device, equipment and storage medium
CN111460334B (en) Information display method and device and electronic equipment
CN111680754B (en) Image classification method, device, electronic equipment and computer readable storage medium
CN111461964B (en) Picture processing method, device, electronic equipment and computer readable medium
CN113642493A (en) Gesture recognition method, device, equipment and medium
CN110991312A (en) Method, apparatus, electronic device, and medium for generating detection information
CN113239943B (en) Three-dimensional component extraction and combination method and device based on component semantic graph
CN111489286B (en) Picture processing method, device, equipment and medium
CN114359673B (en) Small sample smoke detection method, device and equipment based on metric learning
CN112884794B (en) Image generation method, device, electronic equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination