tag:www.gerad.ca,2005:/en/papersCahiers du GERAD2019-07-15T10:58:05-04:0020tag:www.gerad.ca,2005:Paper/26952019-07-15T10:58:05-04:002019-07-15T10:58:05-04:00G-2019-48 : Variable fixing for two-arc sequences in branch-price-and-cut algorithms on path-based modelsGuy Desaulniers, Timo Gschwind, Stefan Irnich
<p>Variable fixing by reduced costs is a popular technique for accelerating the solution process of mixed-integer linear programs. For vehicle routing problems solved by branch-price-and-cut algorithms, it is possible to fix to 0 the variables associated with all routes containing at least one arc from a subset of arcs determined according to the dual solution of a linear relaxation. This is equivalent to removing these arcs from the network used to generate the routes. In this paper, we extend this technique to routes containing sequences of two arcs. Such sequences or their arcs cannot be removed directly from the network because routes traversing only one arc of a sequence might still be allowed. For some of the most common vehicle routing problems, we show how this issue can be overcome by modifying the route generation labeling algorithm in order to remove these sequences indirectly from the network. The proposed acceleration strategy is tested on benchmark instances of the vehicle routing problem with time windows (VRPTW) and four variants of the electric VRPTW. The computational results show that it yields a significant speedup, especially for the most difficult instances.</p>2019-07-01GERADinfo@gerad.catag:www.gerad.ca,2005:Paper/26942019-07-15T10:55:21-04:002019-07-15T10:55:21-04:00G-2019-43 : Flexibility product for water heater aggregators on electricity marketsMarie Pied, Miguel F. Anjos, Roland P. Malhamé
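The two-arc extension in G-2019-48 builds on the classical one-arc rule: an arc can be removed when the cheapest route through it has a reduced cost exceeding the optimality gap. A minimal sketch of that one-arc rule on a toy acyclic network (the data and function names below are illustrative, not from the paper):

```python
# Hedged sketch of reduced-cost arc fixing on a toy acyclic network.
# Arc (i, j) can be removed if the cheapest route using it has a reduced
# cost exceeding the optimality gap UB - LB.

def fix_arcs(nodes, arcs, rc, source, sink, gap):
    """Return the arcs whose best route reduced cost exceeds `gap`.
    `nodes` must be in topological order; `arcs[i]` lists successors of i;
    `rc[(i, j)]` is the reduced cost of arc (i, j)."""
    INF = float("inf")
    # Forward pass: cheapest reduced-cost path from source to each node.
    fwd = {n: INF for n in nodes}
    fwd[source] = 0.0
    for i in nodes:
        for j in arcs.get(i, []):
            fwd[j] = min(fwd[j], fwd[i] + rc[(i, j)])
    # Backward pass: cheapest reduced-cost path from each node to sink.
    bwd = {n: INF for n in nodes}
    bwd[sink] = 0.0
    for i in reversed(nodes):
        for j in arcs.get(i, []):
            bwd[i] = min(bwd[i], rc[(i, j)] + bwd[j])
    # Fix (remove) arcs whose best route through them exceeds the gap.
    return {(i, j) for i in nodes for j in arcs.get(i, [])
            if fwd[i] + rc[(i, j)] + bwd[j] > gap}
```

The paper's contribution is precisely that this per-arc test does not carry over directly to two-arc sequences, which is why the labeling algorithm itself must be modified.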
<p>Intermittent renewable energy, such as solar and wind, brings uncertainty into the grid. To increase their contribution to the energy mix, load management solutions are necessary to correct the resulting typical mismatches between generation and demand. This can be achieved rather effectively with thermostatic loads such as space heaters or water heaters by considering them as means of storage. This article proposes a mean field game-based controller to provide load flexibility to the grid using a multi-layer water heater model. A uniform local state feedback law is used to track the temperature trajectory specified by an aggregator for the group of controlled devices. The law is computed via a near fixed-point algorithm. A scheduling problem for the desired mean water heater target temperatures over a time horizon is formulated to find the maximum flexibility available from the group of loads while maintaining the typical post-control load oscillations within predefined bounds over a fixed time period. The solution of the scheduling problem is obtained by solving a linear optimization problem with upper and lower bounds on the power drawn by the group to converge to an acceptable mean temperature schedule.</p>2019-06-01GERADinfo@gerad.catag:www.gerad.ca,2005:Paper/26932019-07-15T10:30:43-04:002019-07-15T10:52:48-04:00G-2019-41 : Monte Carlo and quasi-Monte Carlo density estimation via conditioningPierre L'Ecuyer, Florian Puchhammer, Amal Ben Abdellah
<p>Estimating the unknown density from which a given independent sample originates is more difficult than estimating the mean, in the sense that for the best popular density estimators, the mean integrated square error converges more slowly than the canonical rate of
<code>\(\mathcal O(1/n)\)</code>. When the sample is generated from a simulation model and we have control over how this is done, we can do better. We study an approach in which conditional Monte Carlo permits one to obtain a smooth estimator of the cumulative distribution function, whose sample derivative is an unbiased estimator of the density at any point, and therefore converges at a faster rate than the usual density estimators, under mild conditions. By combining this with randomized quasi-Monte Carlo to generate the sample, we can achieve an even faster rate.</p>2019-06-01GERADinfo@gerad.catag:www.gerad.ca,2005:Paper/26922019-07-05T16:44:05-04:002019-07-05T16:44:05-04:00G-2019-45 : Nash equilibria in non-zero sum differential games with impulse controlUtsav Sadana, Georges Zaccour
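The smoothing idea in G-2019-41 can be illustrated on a toy target: for \(X = Z_1 + Z_2\) with independent standard normals, conditioning on \(Z_1\) gives the smooth conditional CDF \(\Phi(x - Z_1)\), whose derivative \(\varphi(x - Z_1)\) is an unbiased estimator of the density of \(X\). A minimal sketch (variable names are ours, not the paper's):

```python
# Hedged sketch of a conditional Monte Carlo density estimator.
# Target: the density of X = Z1 + Z2, Z1 and Z2 independent N(0,1).
# Conditioning on Z1 "hides" Z2, so F(x | Z1) = Phi(x - Z1) is smooth
# in x, and its sample derivative phi(x - Z1) is unbiased for the
# N(0, 2) density of X at x.

import math
import random

def phi(u):
    """Standard normal density."""
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def cmc_density(x, n=100_000, seed=1):
    """Average of phi(x - Z1) over n independent draws of Z1."""
    rng = random.Random(seed)
    return sum(phi(x - rng.gauss(0.0, 1.0)) for _ in range(n)) / n

est = cmc_density(0.0)
true = 1.0 / math.sqrt(4.0 * math.pi)   # exact N(0, 2) density at 0
```

Because each term is a bounded, smooth function of the sample point, the estimator avoids the bandwidth trade-off of kernel methods; the paper then layers randomized quasi-Monte Carlo on top to accelerate convergence further.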
<p>In this paper, we introduce a class of deterministic finite-horizon two-player non-zero-sum differential games where one player uses continuous control while the other player uses impulse control. We extend the Pontryagin maximum principle to continuous-time optimal-control problems that involve additional state-dependent costs as well as jumps in the state variable. Thereafter, we formulate the necessary and sufficient conditions for the existence of an open-loop Nash equilibrium in dynamic games with impulse control. We show that the equilibrium timing of impulses can be obtained as a solution of a non-linear optimization problem for a linear-quadratic dynamic game. For the case of a linear-state dynamic game, we obtain analytical solutions for the equilibrium number, timing, and level of impulses. We illustrate our results using numerical experiments.</p>2019-06-01GERADinfo@gerad.catag:www.gerad.ca,2005:Paper/26912019-07-05T16:42:00-04:002019-07-05T16:42:00-04:00G-2019-44 : Challenges in energy policies for the economic integration of prosumers in electric energy systems: A critical survey with a focus on Ontario (Canada)Elizaveta Kuznetsova, Miguel F. Anjos
<p>The growing accessibility and declining cost of distributed renewable energy sources are stimulating the emergence of small-scale residential prosumers who can both produce and consume electricity. This may lead to various scenarios. Such prosumers may increase the uncertainty of consumption behavior, reduce consumption from the grid, and eventually disconnect from the grid.
However, they may remain connected, and their energy potential can provide flexibility for the overall system. Current policy in some jurisdictions promotes disconnection through tax increases, grid charges, and other non-commodity costs. In particular, in Ontario (Canada) only 8.7% of the typical electricity bill covers the cost of energy and power; the remainder subsidizes governmental energy-procurement contracts, compensates the grid, pays for environmental initiatives, and covers other taxes. The situation is aggravated by a lack of a global vision for the energy system and of coordinated actions to achieve this vision. We support the preferred scenario in which prosumers remain connected to the grid. As an alternative to Ontario's current attempts to artificially slow the increase in electricity prices,
we present an extended critical survey of energy policies to motivate a thoughtful reconsideration of current schemes for the economic integration of prosumers in the energy system.</p>2019-06-01GERADinfo@gerad.catag:www.gerad.ca,2005:Paper/26902019-07-05T16:37:36-04:002019-07-05T16:37:36-04:00G-2019-40 : Array-RQMC for option pricing under stochastic volatility modelsAmal Ben Abdellah, Pierre L'Ecuyer, Florian Puchhammer
<p>Array-RQMC has been proposed as a way to effectively apply randomized quasi-Monte Carlo (RQMC) when simulating a Markov chain over a large number of steps to estimate an expected cost or reward. The method can be very effective when the state of the chain has low dimension. For pricing an Asian option under an ordinary geometric Brownian motion model, for example, Array-RQMC can reduce the variance by factors in the millions. In this paper, we show how to apply this method and we study its effectiveness when the underlying process has stochastic volatility. We show that Array-RQMC can also work very well for these models, even though it requires RQMC points of larger dimension. We examine in particular the variance-gamma, Heston, and Ornstein-Uhlenbeck stochastic volatility models, and we provide numerical results.</p>2019-06-01GERADinfo@gerad.catag:www.gerad.ca,2005:Paper/26892019-06-28T11:26:06-04:002019-07-03T10:14:37-04:00G-2019-38 : Likelihood evaluation of jump-diffusion models using deterministic nonlinear filtersJean-François Bégin, Mathieu Boudreault
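For context on the baseline that Array-RQMC (G-2019-40) improves upon, here is a plain Monte Carlo pricer for an arithmetic-average Asian call under geometric Brownian motion. This is only a sketch of the crude estimator, with illustrative parameters; the paper's method replaces the independent random draws with a randomized quasi-Monte Carlo point set and sorts the chain states at each step:

```python
# Plain Monte Carlo baseline for an arithmetic-average Asian call under
# geometric Brownian motion -- a hedged sketch, not the Array-RQMC method.

import math
import random

def asian_call_mc(s0=100.0, k=100.0, r=0.05, sigma=0.2, t=1.0,
                  steps=16, n=20_000, seed=7):
    """Crude MC estimate of the discounted expected Asian call payoff."""
    rng = random.Random(seed)
    dt = t / steps
    drift = (r - 0.5 * sigma * sigma) * dt   # risk-neutral log-drift
    vol = sigma * math.sqrt(dt)
    disc = math.exp(-r * t)
    total = 0.0
    for _ in range(n):
        s, path_sum = s0, 0.0
        for _ in range(steps):
            s *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
            path_sum += s
        total += max(path_sum / steps - k, 0.0)   # average-price payoff
    return disc * total / n
```

The chain state here is two-dimensional (current price and running average), which is what makes the problem a good fit for Array-RQMC's sorting step; the stochastic-volatility models in the paper add one more state coordinate.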
<p>In this study, we develop a deterministic nonlinear filtering algorithm based on a high-dimensional version of Kitagawa (1987) to evaluate the likelihood function of models that allow for stochastic volatility and jumps whose arrival intensity is also stochastic. We show numerically that the deterministic filtering method is precise and much faster than the particle filter, in addition to yielding a smooth function over the parameter space. We then find the maximum likelihood estimates of various models that include stochastic volatility, jumps in the returns and variance, and also stochastic jump arrival intensity with the S&P 500 daily returns. During the Great Recession, the jump arrival intensity increases significantly and contributes to the clustering of volatility and negative returns.</p>2019-06-01GERADinfo@gerad.catag:www.gerad.ca,2005:Paper/26882019-06-28T11:23:30-04:002019-06-28T11:23:30-04:00G-2019-36 : Tulip: An open-source interior-point linear optimization solver with abstract linear algebraMiguel F. Anjos, Andrea Lodi, Mathieu Tanneau
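The deterministic filtering idea behind G-2019-38 can be sketched in one dimension: carry the filtering density on a fixed grid and update it by numerical integration rather than by particles. The toy linear-Gaussian model below is illustrative only; the paper applies a high-dimensional version of this Kitagawa-style scheme to stochastic-volatility jump models:

```python
# Hedged sketch of a one-dimensional deterministic (grid-based) nonlinear
# filter in the spirit of Kitagawa (1987). Toy model (our choice):
#   x_t = a * x_{t-1} + w_t,  w_t ~ N(0, q^2);  y_t = x_t + v_t, v_t ~ N(0, r^2).

import math

def gauss(u, sd):
    """Normal density with mean 0 and standard deviation sd, at u."""
    return math.exp(-0.5 * (u / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def grid_filter(ys, a=0.8, q=0.5, r=1.0, lo=-5.0, hi=5.0, m=201):
    """Return the filtering means E[x_t | y_{1:t}] computed on an m-point grid."""
    dx = (hi - lo) / (m - 1)
    grid = [lo + i * dx for i in range(m)]
    post = [gauss(x, 1.0) for x in grid]          # prior: x_0 ~ N(0, 1)
    means = []
    for y in ys:
        # Prediction: integrate the transition density against the posterior.
        pred = [sum(gauss(x - a * xp, q) * w for xp, w in zip(grid, post)) * dx
                for x in grid]
        # Update: multiply by the likelihood of y, then renormalize on the grid.
        post = [gauss(y - x, r) * p for x, p in zip(grid, pred)]
        z = sum(post) * dx
        post = [p / z for p in post]
        means.append(sum(x * p for x, p in zip(grid, post)) * dx)
    return means
```

Because every quantity is a deterministic quadrature, the resulting likelihood is a smooth function of the model parameters, which is the property the paper exploits for maximum likelihood estimation.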
<p>This paper introduces the algorithmic design and implementation of Tulip, an open-source interior-point solver for linear optimization.
It implements the homogeneous interior-point algorithm with multiple centrality corrections, and therefore handles unbounded and infeasible problems.
Tulip's most remarkable feature is that its algorithmic framework is fully disentangled from linear algebra implementations.
This makes it possible to seamlessly integrate specialized routines for the structured problems that arise, for instance, in Dantzig-Wolfe decomposition.
Extensive computational results are reported.
We find that, using general-purpose sparse linear algebra, Tulip is competitive with open-source interior-point solvers on the Netlib LP testset.
Furthermore, armed with specialized linear algebra, Tulip outperforms state-of-the-art commercial solvers on a set of large-scale, decomposable instances that arise in power systems operation.</p>2019-06-01GERADinfo@gerad.catag:www.gerad.ca,2005:Paper/26872019-06-26T14:07:01-04:002019-06-26T16:38:56-04:00G-2019-42 : Data association via set packing for computer vision applicationsJulian Yarkony, Yossiri Adulyasak, Maneesh Singh, Guy Desaulniers
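Tulip itself is written in Julia; the SciPy snippet below merely illustrates the design idea its abstract describes, namely decoupling the algorithm from the linear algebra. A Krylov solver only ever sees a matrix-vector product, so a structured operator can replace an explicit matrix transparently (toy diagonal operator, our choice):

```python
# Illustration (not Tulip's actual interface) of abstract linear algebra:
# conjugate gradient sees only a matvec callback, never a stored matrix.

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 50
diag = np.arange(1, n + 1, dtype=float)   # SPD "matrix" D = diag(1, ..., n)

def matvec(v):
    """Structured product D @ v -- no explicit matrix is ever stored."""
    return diag * v

op = LinearOperator((n, n), matvec=matvec)
b = np.ones(n)
x, info = cg(op, b)   # CG is oblivious to how the operator is represented
```

In an interior-point method the same pattern lets block-structured or decomposition-specific factorizations slot in behind the normal-equations solve without touching the algorithmic layer.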
<p>Significant progress has been made in the field of computer vision due to the development of supervised machine learning algorithms, which efficiently extract information from high-dimensional data such as images and videos. Such techniques are particularly effective at recognizing the presence or absence of entities in domains where labeled data is abundant. However, supervised learning is not sufficient in applications where one needs to annotate each unique entity in crowded scenes while respecting known domain-specific structures of those entities. This problem, known as data association, provides fertile ground for the application of combinatorial optimization. In this paper, we present three computer vision applications, namely, multi-person tracking, multi-person pose estimation, and multi-cell segmentation, which can be formulated as integer linear programs with a massive number of variables. To solve these problems, column generation algorithms are applied to circumvent the need to enumerate all variables explicitly. To enhance the solution process, we provide a general approach for applying subset-row inequalities to tighten the formulations, and introduce novel dual optimal inequalities to reduce the dual search space. The proposed algorithms and their enhancements are successfully applied to solve the three aforementioned computer vision problems and achieve superior performance compared to benchmark approaches.</p>2019-06-01GERADinfo@gerad.catag:www.gerad.ca,2005:Paper/26862019-06-20T16:30:44-04:002019-06-20T16:30:44-04:00G-2019-35 : Spatio-temporal flexibility requirement envelopes for low-carbon power system energy managementYuchong Huo, François Bouffard, Géza Joós
<p>The deepening penetration of renewable power generation is challenging how the minute-to-minute balancing of supply and demand is carried out by power system operators. Several proposals for short-term operational planning rely on robust optimization to offer guarantees on the ability of the operator to meet a wide array of possible scenarios. The main downside of these approaches is their conservative results, whose operating costs and/or carbon tally may be sub-economical. Such results arise because these approaches often put emphasis on very low-probability portions of the uncertainty set they consider. Moreover, these approaches also often ignore the inherent temporal and spatial couplings of wind and solar generation variability. In this paper, we seek to reduce the conservativeness of these uncertainty sets by proposing the concept of spatio-temporal flexibility requirement envelopes. We show how this concept is able to efficiently capture and model the temporal trends and spatial correlation of multisite renewable generation and load. A mathematical program for energy scheduling is also developed using the projections of this envelope. We showcase the use and advantages of spatio-temporal flexibility requirement envelopes and their associated scheduling approach in a microgrid and on a modified IEEE Reliability Test System.</p>2019-05-01GERADinfo@gerad.catag:www.gerad.ca,2005:Paper/26852019-06-20T16:28:32-04:002019-06-26T10:07:45-04:00G-2019-34 : Present bias and the inefficiency of the centralized economy. The role of the elasticity of intertemporal substitutionFrancisco Cabo, Guiomar Martín-Herrán, María Pilar Martínez-García
<p>We analyze an endogenous growth model with non-constant discounting and a
negative externality of growth on utility. With a decreasing rate of
impatience, time-consistent agents anticipate the behavior of their future
selves and play a game against them. The strategic interaction between
subsequent central planners implies slower growth than the market solution,
where the externality is not internalized. Indeed, growth can be excessively
slow from a social welfare standpoint. Contrary to exponential discounting,
under non-constant discounting we prove that the market equilibrium is
Pareto-improving provided that the negative externality is sufficiently
small. Thus, policy interventions would only be meaningful when the
externality surpasses a given threshold. For a specific family of
non-constant discount functions we observe that the range for the intensity
of the externality compatible with a Pareto-improving market solution widens
with the elasticity of intertemporal substitution in consumption. Similarly,
this range also widens as time preferences depart further from constant
discounting, either because the instantaneous discount rates vary over a
wider range or because they decay more slowly.</p>2019-05-01GERADinfo@gerad.catag:www.gerad.ca,2005:Paper/26842019-06-12T09:52:01-04:002019-06-12T09:52:01-04:00G-2019-37 : Optimal dynamic management of a charityBertrand Crettez, Naila Hayek, Georges Zaccour
<p>Since nonprofit organizations play an important role in providing goods and
services in all countries, this paper aims at determining optimal policies
for a charity. The starting point of our analysis is that the amount of
donations received by a charity is a function of its reputation, which is an
asset that is built up over time, not overnight. To account for this
important aspect, we propose a dynamic model where the charity can allocate
its revenues to three main activities, namely, program expenses (charitable
projects), information (promotion of its causes, website, etc.) and
administration (worker/manager salaries and other administrative costs).
We assume that the donors are sensitive to the way in which the charity is
managed. If the administrative expenses are above a socially accepted
norm, then the charity's reputation suffers. The opposite occurs when the
charity is efficient and keeps its administrative costs below the norm.
We prove that depending on the parameter values, there exist different
optimal policies involving either positive or nil advertising and
administrative expenses. We discuss some policy implications for each case
and assess the impact of the norm on the results.</p>2019-06-01GERADinfo@gerad.catag:www.gerad.ca,2005:Paper/26832019-06-12T09:44:02-04:002019-06-12T09:44:02-04:00G-2019-32 : Graph colouring variationsAlain Hertz, Bernard Ries
<p>We consider three colouring problems which are variations of the basic vertex-colouring problem, and are motivated by applications from various domains. We give pointers to theoretical and algorithmic developments for each of these variations.</p>2019-05-01GERADinfo@gerad.catag:www.gerad.ca,2005:Paper/26822019-06-12T09:40:46-04:002019-06-12T09:42:00-04:00G-2019-33 : Are retailers' private labels always detrimental to national brand manufacturers? A differential game perspectiveAlessandra Buratto, Sihem Taboubi
<p>In this paper, we study the competition between national brands and private labels (or store brands) by analyzing the impacts of their presence on strategies, sales, and profits in a vertical channel structure. We use a differential game, where the channel members control price and non-price marketing variables, and investigate two scenarios. The first is used as a benchmark. It considers an exclusive retailer that distributes only a national brand provided by a manufacturer, who invests in national advertising to build its brand's goodwill. In the second scenario, the retailer owns a private label that competes with the national brand. By computing the results under both scenarios, we provide answers to the following research questions: (i) What should the price and the non-price marketing strategies be, with and without the private label? (ii) How do they compare? (iii) Is the presence of a private label always profitable for the retailer and harmful to the manufacturer, as the literature suggests? One of our main results indicates that the manufacturer is not necessarily always hurt by the private label. More specifically, we find that, when the local advertising competition between both brands is lower than the price competition, the national brand's manufacturer could charge a higher transfer price to a retailer carrying a private label. This result
brings into question an oft-cited explanation given in the literature, namely, that retailers choose to sell private labels as a counter-strategy to overcome double marginalization and to gain market power against the manufacturer.</p>2019-05-01GERADinfo@gerad.catag:www.gerad.ca,2005:Paper/26812019-06-06T10:37:48-04:002019-06-06T14:24:29-04:00G-2019-31 : Post-separation feature reductionJohn W. Chinneck
<p>Reducing the number of features used in data classification can remove noisy or redundant features, reduce the cost of data collection, and improve the accuracy of the classifier. This important step is normally conducted <em>before</em> a data separation (typically a hyperplane) is found. We reverse the process: a separating hyperplane is found first, and then <em>afterwards</em> a revised hyperplane is calculated that has fewer features, but that provides the same separation or better. The method relies on recent algorithms for finding sparse solutions for linear programs. Experiments show that the number of features in the separating hyperplane can be reduced substantially, while the overall accuracy often increases (and never decreases). This approach allows feature reduction to be easily incorporated into classifier decision tree construction using any algorithm for hyperplane selection. Separations that are substantially similar to the original separation but that use even fewer features can also be found. If the features have costs, the algorithm is easily extended to find separations that are at least as accurate as the original separation at much lower total cost.</p>2019-05-01GERADinfo@gerad.catag:www.gerad.ca,2005:Paper/23782016-09-26T10:45:55-04:002019-06-28T11:35:09-04:00G-2016-69 : Linearized robust counterparts of two-stage robust optimization problems with applications in operations managementAmir Ardestani Jaafari, Erick Delage
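The sparse-LP step that G-2019-31 alludes to can be sketched as an L1-norm minimization: among all hyperplanes that keep every point on its correct side with margin one, pick the one with the smallest sum of absolute weights, which drives unneeded feature weights to zero. A toy instance with illustrative data (this is a generic L1 surrogate, not the paper's exact algorithm):

```python
# Hedged sketch: find a sparse separating hyperplane w.x = b by minimizing
# ||w||_1 subject to y_i * (w.x_i - b) >= 1 for every labeled point.
# Toy data chosen so that the first feature alone separates the classes.

import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 5.0], [3.0, -4.0], [-2.0, 3.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

# LP variables: w+ (2), w- (2), b+ (1), b- (1); all nonnegative,
# with w = w+ - w- and b = b+ - b- (standard split for |.| in an LP).
c = [1.0, 1.0, 1.0, 1.0, 0.0, 0.0]        # objective: minimize ||w||_1
A_ub, b_ub = [], []
for xi, yi in zip(X, y):                   # -y_i (w.x_i - b) <= -1
    A_ub.append([-yi * xi[0], -yi * xi[1], yi * xi[0], yi * xi[1], yi, -yi])
    b_ub.append(-1.0)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 6)
w = np.array([res.x[0] - res.x[2], res.x[1] - res.x[3]])
b = res.x[4] - res.x[5]
```

On this instance the optimal hyperplane uses only the first feature, illustrating how the L1 objective performs feature reduction while preserving the separation.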
<p>In this article, we discuss an alternative method for deriving conservative approximation models
for two-stage robust optimization problems. The method mainly relies on a linearization scheme employed
in bilinear programming; hence, we say that it gives rise to "linearized robust counterpart" models. We identify a close relation between this linearized robust counterpart model and the popular affinely
adjustable robust counterpart model. We also describe methods of modifying both types of models to make
these approximations less conservative. These methods are heavily inspired by the use of valid linear and
conic inequalities in the linearization process for bilinear models. We finally demonstrate how to employ
this new scheme in location-transportation and multi-item newsvendor problems to improve the numerical
efficiency and performance guarantees of robust optimization.</p>2016-09-01GERADinfo@gerad.catag:www.gerad.ca,2005:Paper/26802019-05-16T15:36:55-04:002019-05-16T15:39:51-04:00G-2019-29 : Heuristics for the dynamic facility location problem with modular capacitiesAllyson Fernandes da Costa Silva, Daniel Aloise, Leandro C. Coelho, Caroline Rocha
<p>This paper studies the Dynamic Facility Location Problem with Modular Capacities (DFLPM). We propose a linear relaxation based heuristic (LRH) and an evolutionary heuristic that hybridizes a genetic algorithm with a variable neighborhood descent (GA+VND) to solve it. The problem is a generalization of several facility location problems and consists in determining the locations and sizes of facilities to minimize location and demand allocation costs, with decisions taken periodically over a planning horizon. The DFLPM is solved using two heuristics tailored for different network configurations and cost structures. Experiments are reported comparing them to a state-of-the-art mixed integer programming (MIP) formulation for the problem from the literature solved by CPLEX. For the existing benchmark instances, LRH improved by VND finds solutions within 0.03% of the optimal ones in less than half of the computation time of the state-of-the-art method from the literature. To yield a better representation of real-life scenarios, we introduce a new set of instances for which GA+VND proves to be an effective approach, outperforming both the CPLEX and LRH methods.</p>2019-04-01GERADinfo@gerad.catag:www.gerad.ca,2005:Paper/26792019-05-16T15:32:46-04:002019-05-16T15:40:31-04:00G-2019-28 : Convex fuzzy \(k\)-medoids clusteringDaniel Nobre Pinheiro, Daniel Aloise, Simon Blanchard
<p><code>\(K\)</code>-medoids clustering is among the most popular methods for cluster analysis, but it carries several assumptions about the nature of the latent clusters. In this paper, we introduce the Convex Fuzzy <em>k</em>-Medoids (CFKM) model, whose underlying formulation not only relaxes the assumption that objects must be assigned entirely to one and only one medoid, but also that medoids must be assigned entirely to one and only one cluster. Moreover, due to its convexity, CFKM resolution is completely robust to initialization.
We compare our model with two fuzzy <em>k</em>-medoids clustering models found in the literature: the Fuzzy <em>k</em>-Medoids (FKM) and the Fuzzy Clustering with Multi-Medoids (FMMdd), both solved approximately by heuristics because of their computational hardness. Our experiments on synthetic and real-world data sets reveal that our model can uniquely discover
important aspects of clustered data which are inherently fuzzy in nature, besides being more robust regarding the
hyperparameters of the fuzzy clustering task.</p>2019-04-01GERADinfo@gerad.catag:www.gerad.ca,2005:Paper/26782019-05-16T15:28:48-04:002019-05-16T15:29:49-04:00G-2019-27 : Implementing a smooth exact penalty function for constrained nonlinear optimizationRon Estrin, Michael P. Friedlander, Dominique Orban, Michael A. Saunders
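For contrast with the convex CFKM model of G-2019-28, the FKM-style heuristics it is compared against alternate between medoid selection and a fuzzy membership update. A sketch of that classic membership step, with a toy dissimilarity matrix of our choosing (the exponent uses the usual fuzzifier m):

```python
# Hedged sketch of the standard fuzzy membership update used by FKM-style
# heuristics: membership of object i in the cluster of medoid j is
# proportional to (1 / d(i, j))^(1 / (m - 1)), normalized over medoids.
# CFKM, by contrast, solves a single convex program instead of iterating
# such updates from an initialization.

def fuzzy_memberships(d, medoids, m=2.0, eps=1e-12):
    """u[i][j]: degree to which object i belongs to the j-th medoid.
    `d` is a symmetric dissimilarity matrix; `eps` guards d(i, i) = 0."""
    u = []
    for i in range(len(d)):
        scores = [(1.0 / max(d[i][j], eps)) ** (1.0 / (m - 1.0))
                  for j in medoids]
        total = sum(scores)
        u.append([s / total for s in scores])
    return u
```

Each row of memberships sums to one, and a medoid claims (almost) full membership of itself; the quality of the final clustering, however, depends on the initial medoids, which is the sensitivity the convex model removes.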
<p>We build upon Estrin et al. (2019) to develop a general
constrained nonlinear optimization algorithm based on a smooth
penalty function proposed by
Fletcher (1970,1973b). Although Fletcher's approach has historically
been considered impractical, we show that the computational kernels required
are no more expensive than those in other widely accepted methods for nonlinear
optimization. The main kernel for evaluating the penalty function and its
derivatives solves structured linear systems. When the matrices are available
explicitly, we store a single factorization per iteration. Otherwise, we
obtain a factorization-free optimization algorithm by solving each linear system
iteratively. The penalty function shows promise in cases where the linear systems
can be solved efficiently, e.g., PDE-constrained optimization problems when
efficient preconditioners exist. We demonstrate the merits of the approach, and
give numerical results on several PDE-constrained and standard test problems.</p>2019-04-01GERADinfo@gerad.catag:www.gerad.ca,2005:Paper/26772019-04-26T16:52:28-04:002019-04-29T09:40:09-04:00G-2019-30 : Constrained stochastic blackbox optimization using probabilistic estimatesCharles Audet, Kwassi Joseph Dzahini, Sébastien Le Digabel, Michael Kokkolaras
<p>This work introduces Stoch-MADS, a stochastic variant of the mesh adaptive direct search (MADS) algorithm, which was originally designed for deterministic blackbox optimization. Stoch-MADS considers the constrained optimization of an objective function <code>\(f\)</code> whose values can only be computed through a stochastic blackbox subject to random noise of an unknown distribution. The proposed algorithm uses an algorithmic framework similar to that of MADS and relies on random estimates of the true function values, obtained from their stochastic observations, to certify improvements, since an exact deterministic version of <code>\(f\)</code> is not available. Such estimates are required to be accurate with a sufficiently large, but fixed, probability and to satisfy a variance condition. The ability of the proposed algorithm to generate an asymptotically dense set of search directions is then exploited to show that it converges to a Clarke-hypertangent stationary point of <code>\(f\)</code> with probability one, with the help of martingale theory.</p>2019-04-01GERADinfo@gerad.ca
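The estimate-then-accept idea in G-2019-30 can be caricatured with a toy direct search on a noisy blackbox: replicate and average noisy evaluations before comparing points, and shrink the step on failure. This sketch omits everything that makes Stoch-MADS rigorous (adaptive mesh, dense polling directions, probabilistic accuracy and variance requirements); the toy objective and all parameters are ours:

```python
# Toy direct search on a noisy blackbox -- illustrates only the idea of
# comparing averaged estimates before accepting a move; NOT Stoch-MADS.

import random

def estimate(f, x, reps, rng):
    """Average `reps` noisy evaluations of f at x."""
    return sum(f(x, rng) for _ in range(reps)) / reps

def noisy_search(f, x0, step=1.0, reps=64, iters=60, seed=3):
    rng = random.Random(seed)
    x, fx = list(x0), estimate(f, x0, reps, rng)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):            # poll +/- along each coordinate
            for s in (step, -step):
                y = list(x)
                y[i] += s
                fy = estimate(f, y, reps, rng)
                if fy < fx:                # accept on estimated decrease
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5                    # refine the "mesh" on failure
    return x

# Noisy quadratic with true minimizer (1, -2) and N(0, 0.1) noise.
f = lambda x, rng: (x[0] - 1) ** 2 + (x[1] + 2) ** 2 + rng.gauss(0.0, 0.1)
xbest = noisy_search(f, [5.0, 5.0])
```

Averaging 64 replications keeps the estimation error well below the early improvement sizes, so the search settles near the true minimizer despite the noise.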