G-2024-03

Optimizing multi-item slow-moving inventory using constrained Markov decision processes and column generation

Inventory management for slow-moving items is challenging due to their high intermittence and lumpiness. Recent developments in machine learning and computational statistical techniques allow us to leverage complex distributions, such as zero-inflated distributions, to better characterize the demand functions of slow-moving items. Nevertheless, exploiting such outputs in the decision-making process remains a challenging task. We present an inventory optimization framework based on a coupled, constrained Markov decision process (CMDP) that is directly compatible with discrete demand functions. This approach can leverage complex discrete lead-time demand functions, including empirical and zero-inflated distributions. The objective is to jointly determine inventory policies for multiple items under multiple target levels, which include common inventory measures such as stockout levels, fulfillment levels, and the expected number of orders. To overcome the dimensionality issue, we employ a decomposition method based on a dual linear programming formulation of the CMDP, together with several computational enhancements. We propose a branch-and-price approach to solve the CMDP model exactly, as well as a column generation heuristic. We provide computational comparisons with an existing approach from the literature, along with computational experiments on real-world data sets. The numerical results show that the CMDP can be solved efficiently, and that its use in conjunction with empirical and zero-inflated negative binomial distributions outperforms benchmark and traditional approaches. The proposed framework provides practitioners with an efficient, flexible, and constructive tool to jointly manage the inventory of multiple items.
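To illustrate the kind of discrete lead-time demand function the framework can consume, the following is a minimal sketch of a zero-inflated negative binomial probability mass function. All parameter values are hypothetical, and the negative binomial is restricted to an integer shape parameter for simplicity; the paper's actual model and estimation procedure are not reproduced here.

```python
from math import comb

def nb_pmf(k: int, r: int, p: float) -> float:
    # Negative binomial pmf: probability of k failures before
    # the r-th success, with success probability p (integer r).
    return comb(k + r - 1, k) * (p ** r) * ((1 - p) ** k)

def zinb_pmf(k: int, pi: float, r: int, p: float) -> float:
    # Zero-inflated negative binomial: with probability pi the
    # demand is exactly zero; otherwise it follows NB(r, p).
    return (pi if k == 0 else 0.0) + (1 - pi) * nb_pmf(k, r, p)

# Hypothetical parameters: 30% structural zeros on top of NB(2, 0.5).
dist = [zinb_pmf(k, 0.3, 2, 0.5) for k in range(50)]
```

Because such a pmf is an arbitrary discrete distribution, it can feed directly into the CMDP transition probabilities without any continuous approximation, which is the compatibility the abstract emphasizes.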

25 pages


Document

G2403.pdf (600 KB)