Quadratic programming and quadratically constrained quadratic programming: theory, algorithms, and applications
DOI: https://doi.org/10.56764/hpu2.jos.2025.4.03.80-96

Abstract
This survey provides a systematic review of quadratic programming (QP) and quadratically constrained quadratic programming (QCQP) problems. The paper reviews mathematical formulations and problem taxonomies based on convexity properties, surveys optimality conditions through Lagrangian theory and the Karush-Kuhn-Tucker (KKT) conditions, and traces foundational work by Markowitz, Wolfe, and Frank and Wolfe alongside key algorithmic developments. Four major algorithmic paradigms are examined: (1) active-set methods with finite convergence properties, (2) polynomial-time interior-point methods, (3) modern operator-splitting approaches, including OSQP and ADMM, and (4) semidefinite programming relaxations for non-convex cases, together with their theoretical properties and convergence guarantees. The methodology is illustrated through case studies in facility location optimization and production planning that demonstrate the application of the KKT conditions and Lagrange multiplier theory, while examples ranging from portfolio optimization to model predictive control illustrate broader applicability. The work connects classical optimization theory with contemporary algorithmic approaches, offering insights for researchers and guidance for practitioners in operations research, engineering, and applied mathematics.
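The abstract's emphasis on the KKT conditions for QP can be sketched concretely. For an equality-constrained convex QP, the KKT conditions reduce to a single linear system; the sketch below solves a toy instance with plain Gaussian elimination. The example problem and the helper name `solve_linear` are illustrative assumptions, not drawn from the paper.

```python
# For an equality-constrained convex QP
#   minimize   (1/2) x^T Q x + c^T x
#   subject to A x = b,
# the KKT conditions form one linear system:
#   [ Q  A^T ] [x]   [-c]
#   [ A   0  ] [y] = [ b ]
# Toy instance: minimize x1^2 + x2^2  subject to  x1 + x2 = 1.

def solve_linear(M, rhs):
    """Solve M z = rhs by Gaussian elimination with partial pivoting."""
    n = len(M)
    A = [row[:] + [r] for row, r in zip(M, rhs)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for k in range(col, n + 1):
                A[r][k] -= f * A[col][k]
    z = [0.0] * n
    for r in range(n - 1, -1, -1):
        z[r] = (A[r][n] - sum(A[r][k] * z[k] for k in range(r + 1, n))) / A[r][r]
    return z

# Q = 2*I (from f(x) = x1^2 + x2^2), c = 0, A = [1 1], b = [1].
kkt = [[2.0, 0.0, 1.0],
       [0.0, 2.0, 1.0],
       [1.0, 1.0, 0.0]]
rhs = [0.0, 0.0, 1.0]

x1, x2, y = solve_linear(kkt, rhs)
print(x1, x2, y)  # optimum x = (0.5, 0.5), multiplier y = -1.0
```

Stationarity (2x + y·∇g = 0) and primal feasibility (x1 + x2 = 1) hold at the returned point, which is exactly what the paper's case studies verify by hand; production solvers such as OSQP handle inequality constraints as well via operator splitting rather than a direct KKT solve.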
References
[1] H. Markowitz, “Portfolio selection,” The Journal of Finance, vol. 7, no. 1, pp. 77–91, Mar. 1952, doi: 10.1111/j.1540-6261.1952.tb01525.x.
[2] P. Wolfe, “The simplex method for quadratic programming,” Econometrica, vol. 27, no. 3, pp. 382–398, 1959, doi: 10.2307/1909468.
[3] M. Frank and P. Wolfe, “An algorithm for quadratic programming,” Naval Research Logistics Quarterly, vol. 3, no. 1-2, pp. 95–110, 1956, doi: 10.1002/nav.3800030109.
[4] J. Platt, “Fast training of support vector machines using sequential minimal optimization,” in Advances in Kernel Methods - Support Vector Learning, B. Schölkopf, C. Burges, and A. Smola, Eds. MIT Press, pp. 185–208, 1999, doi: 10.7551/mitpress/1130.003.0016.
[5] V. Vapnik, Statistical Learning Theory. New York: Wiley, 1998.
[6] D. Q. Mayne and J. B. Rawlings, “Correction to ‘Constrained model predictive control: Stability and optimality’,” Automatica, vol. 37, no. 3, p. 483, Mar. 2001, doi: 10.1016/s0005-1098(00)00173-4.
[7] Y. Wang and S. Boyd, “Fast model predictive control using online optimization,” IEEE Transactions on Control Systems Technology, vol. 18, no. 2, pp. 267–278, 2010, doi: 10.1109/TCST.2009.2017934.
[8] P. E. Gill and W. Murray, “Numerically stable methods for quadratic programming,” Mathematical Programming, vol. 14, no. 1, pp. 349–372, Dec. 1978, doi: 10.1007/bf01588976.
[9] D. Goldfarb and A. Idnani, “A numerically stable dual method for solving strictly convex quadratic programs,” Mathematical Programming, vol. 27, no. 1, pp. 1–33, Sep. 1983, doi: 10.1007/bf02591962.
[10] N. Karmarkar, “A new polynomial-time algorithm for linear programming,” Combinatorica, vol. 4, no. 4, pp. 373–395, 1984, doi: 10.1007/BF02579150.
[11] N. Megiddo, “Pathways to the optimal set in linear programming,” in Progress in Mathematical Programming, pp. 131–158, Springer, 1989, doi: 10.1007/978-1-4613-9617-8_8.
[12] S. Mehrotra, “On the implementation of a primal-dual interior point method,” SIAM Journal on Optimization, vol. 2, no. 4, pp. 575–601, Nov. 1992, doi: 10.1137/0802028.
[13] B. Stellato, G. Banjac, P. Goulart, A. Bemporad, and S. Boyd, “OSQP: an operator splitting solver for quadratic programs,” Mathematical Programming Computation, vol. 12, no. 4, pp. 637–672, Feb. 2020, doi: 10.1007/s12532-020-00179-2.
[14] H. J. Ferreau, C. Kirches, A. Potschka, H. G. Bock, and M. Diehl, “qpOASES: a parametric active-set algorithm for quadratic programming,” Mathematical Programming Computation, vol. 6, no. 4, pp. 327–363, 2014, doi: 10.1007/s12532-014-0071-1.
[15] Gurobi Optimization, LLC, “Gurobi Optimizer Reference Manual,” 2023. [Online]. Available: https://www.gurobi.com
[16] IBM ILOG CPLEX, “CPLEX Optimization Studio,” IBM Corporation, 2023. [Online]. Available: https://www.ibm.com/products/ilog-cplex-optimization-studio.
[17] MOSEK ApS, “The MOSEK optimization toolbox for MATLAB manual,” Version 10.0, 2023. [Online]. Available: https://docs.mosek.com.
[18] N. Z. Shor, “Class of global minimum bounds of polynomial functions,” Cybernetics, vol. 23, no. 6, pp. 731–734, 1987, doi: 10.1007/BF01070233.
[19] V. A. Yakubovich, “S-procedure in nonlinear control theory,” Vestnik Leningrad University, vol. 1, pp. 62–77, 1971.
[20] Z. Q. Luo, W. K. Ma, A. M. C. So, Y. Ye, and S. Zhang, “Semidefinite relaxation of quadratic optimization problems,” IEEE Signal Processing Magazine, vol. 27, no. 3, pp. 20–34, 2010, doi: 10.1109/MSP.2010.936019.
[21] J. Nocedal and S. J. Wright, Numerical Optimization, 2nd ed. New York: Springer, 2006.
[22] G. H. Golub and C. F. Van Loan, Matrix Computations, 4th ed. Baltimore: Johns Hopkins University Press, 2013.
[23] S. J. Wright, Primal-Dual Interior-Point Methods. Philadelphia: SIAM, 1997, doi: 10.1137/1.9781611971453.
[24] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1–122, 2011, doi: 10.1561/2200000016.
[25] E. A. Yildirim and S. J. Wright, “Warm-start strategies in interior-point methods for linear programming,” SIAM Journal on Optimization, vol. 12, no. 3, pp. 782–810, Jan. 2002, doi: 10.1137/s1052623400369235.
[26] S. Kim and M. Kojima, “Exact solutions of some nonconvex quadratic optimization problems via SDP and SOCP relaxations,” Computational Optimization and Applications, vol. 26, no. 2, pp. 143–154, 2003, doi: 10.1023/A:1025794313696.
[27] T. A. Davis, Direct Methods for Sparse Linear Systems. Philadelphia: SIAM, 2006, doi: 10.1137/1.9780898718881.
[28] S. Diamond and S. Boyd, “CVXPY: A Python-embedded modeling language for convex optimization,” Journal of Machine Learning Research, vol. 17, paper 83, pp. 1–5, 2016.
[29] A. Ben-Tal, L. El Ghaoui, and A. Nemirovski, Robust Optimization. Princeton: Princeton University Press, 2009, doi: 10.1515/9781400831050.
[30] A. Agrawal, B. Amos, S. Barratt, S. Boyd, S. Diamond, and J. Z. Kolter, “Differentiable convex optimization layers,” in Advances in Neural Information Processing Systems, vol. 32, 2019.
[31] S. Caron, A. Zaki, P. Otta, D. Arnström, J. Carpentier, F. Yang, and P.-A. Leziart, “qpbenchmark: Benchmark for quadratic programming solvers available in Python,” version 2.5.0, 2025. [Online]. Available: https://github.com/qpsolvers/qpbenchmark.
[32] M. Schubiger, G. Banjac, and J. Lygeros, “GPU acceleration of ADMM for large-scale quadratic programming,” Journal of Parallel and Distributed Computing, vol. 144, pp. 55–67, 2020, doi: 10.1016/j.jpdc.2020.04.008.
[33] S. Cipolla and J. Gondzio, “Proximal stabilized interior point methods and low-frequency-update preconditioning techniques,” Journal of Optimization Theory and Applications, vol. 197, no. 3, pp. 1061–1103, Apr. 2023, doi: 10.1007/s10957-023-02194-4.
[34] H. D. Mittelmann, “Decision tree for optimization software,” Arizona State University, 2024. [Online]. Available: https://plato.asu.edu/guide.html.
[35] A. Leulmi, R. Ziadi, C. Souli, M. A. Saleh, and A. Z. Almaymuni, “An interior point algorithm for quadratic programming based on a new step-length,” International Journal of Analysis and Applications, vol. 22, article 233, 2024, doi: 10.28924/2291-8639-22-2024-233.
Copyright and License
Copyright (c) 2025 Kim-Thuy Dinh Thi

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.