The Douglas–Rachford method: Smooth and strongly convex + convex
Problem setup
Consider the composite minimization problem \(\min_{x \in \calH} f_1(x) + f_2(x)\), where:
\(f_1 : \calH \to \reals\) is \(\mu\)-strongly convex and \(L\)-smooth, with \(0 < \mu < L\),
\(f_2 : \calH \to \reals \cup \set{\pm\infty}\) is proper, convex, and lower semicontinuous.
Equivalently, we solve the inclusion problem of finding \(x \in \calH\) such that \(0 \in \nabla f_1(x) + \partial f_2(x)\).
For an initial point \(x^0 \in \calH\), step size \(\gamma \in \reals_{++}\), and relaxation \(\lambda \in \reals\), the Douglas–Rachford update is
\(y^k = \operatorname{prox}_{\gamma f_1}(x^k)\), \(z^k = \operatorname{prox}_{\gamma f_2}(2y^k - x^k)\), \(x^{k+1} = x^k + \lambda\,(z^k - y^k)\).
In this example, we fix \(\mu=1\), \(L=2\), \(\lambda=1\), sweep \(\gamma \in (0,5]\), and search for the smallest contraction factor \(\rho\in[0,1)\) provable using AutoLyap such that
\(\|x^{k+1} - x^\star\|^2 \le \rho\,\|x^k - x^\star\|^2\) for all \(k \ge 0\),
where \(x^\star\) denotes the fixed point of the Douglas–Rachford iteration.
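To make the update concrete, here is a minimal NumPy sketch of the iteration on a toy instance: a quadratic \(f_1\) with curvature in \([\mu, L] = [1, 2]\) and \(f_2\) the indicator of the nonnegative orthant. Both choices are illustrative and are not part of the AutoLyap model below.

```python
import numpy as np

# Illustrative toy instance: f1(x) = 0.5 x^T Q x - b^T x, with eigenvalues of Q
# in [mu, L] = [1, 2]; f2 = indicator of {x >= 0}, whose prox is the projection.
Q = np.diag([1.0, 2.0])
b = np.array([1.0, -1.0])
gamma, lam = 1.0, 1.0

def prox_f1(v):
    # prox_{gamma f1}(v) = argmin_u f1(u) + ||u - v||^2 / (2 gamma)
    return np.linalg.solve(np.eye(2) + gamma * Q, v + gamma * b)

def prox_f2(v):
    # Projection onto the nonnegative orthant.
    return np.maximum(v, 0.0)

x = np.zeros(2)
for _ in range(200):
    y = prox_f1(x)            # y^k = prox_{gamma f1}(x^k)
    z = prox_f2(2.0 * y - x)  # z^k = prox_{gamma f2}(2 y^k - x^k)
    x = x + lam * (z - y)     # x^{k+1} = x^k + lambda (z^k - y^k)

# y approaches the minimizer of f1 + f2; for this instance it is (1, 0).
```

With \(\mu=1\), \(L=2\), \(\gamma=1\), \(\lambda=1\), the theoretical rate below evaluates to \(\rho = (1/2 + 1/6)^2 = 4/9\) per iteration on \(\|x^k - x^\star\|^2\), so 200 iterations reach machine precision on this toy problem.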
Model the problem in AutoLyap and search for the smallest \(\rho\)
\(f_1\) is modeled by SmoothStronglyConvex.
\(f_2\) is modeled by Convex.
The full inclusion is built with InclusionProblem.
The update rule is represented by DouglasRachford with type="function".
Distance-to-solution parameters are obtained with IterationIndependent.LinearConvergence.get_parameters_distance_to_solution.
The contraction factor is searched with IterationIndependent.LinearConvergence.bisection_search_rho.
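Conceptually, bisection_search_rho performs a standard bisection on \(\rho\): each candidate is tested for feasibility of a Lyapunov certificate, and the interval shrinks until the tolerance is met. Here is a generic sketch of that scheme with a hypothetical feasibility oracle; it is not the library's actual internals.

```python
def bisect_rho(feasible, lo=0.0, hi=1.0, tol=1e-3):
    """Smallest rho in [lo, hi] with feasible(rho) True, up to tol.

    Assumes monotonicity: if a certificate exists at rho, one also
    exists at every larger rho.
    """
    if not feasible(hi):
        return None  # no certificate anywhere in the interval
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            hi = mid  # certificate found: tighten from above
        else:
            lo = mid  # infeasible: the true rho is larger
    return hi
```

In AutoLyap the role of feasible(rho) is played by solving the certificate problem with the chosen backend; tol=1e-3 above mirrors the tol argument passed in the example below.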
Run the iteration-independent analysis
This example uses the MOSEK Fusion backend (backend="mosek_fusion").
Install the optional MOSEK dependency first:
pip install "autolyap[mosek]"
import numpy as np
from autolyap import IterationIndependent, SolverOptions
from autolyap.algorithms import DouglasRachford
from autolyap.problemclass import Convex, InclusionProblem, SmoothStronglyConvex
mu = 1.0
L = 2.0
gamma = 1.0
lambda_value = 1.0
problem = InclusionProblem(
    [
        SmoothStronglyConvex(mu=mu, L=L),  # f1
        Convex(),  # f2
    ]
)
algorithm = DouglasRachford(
    gamma=gamma,
    lambda_value=lambda_value,
    type="function",
)
solver_options = SolverOptions(backend="mosek_fusion")
# License-free option:
# solver_options = SolverOptions(backend="cvxpy", cvxpy_solver="CLARABEL")
P, p, T, t = IterationIndependent.LinearConvergence.get_parameters_distance_to_solution(
    algorithm
)
result = IterationIndependent.LinearConvergence.bisection_search_rho(
    problem,
    algorithm,
    P,
    T,
    p=p,
    t=t,
    S_equals_T=True,
    s_equals_t=True,
    remove_C3=True,
    tol=1e-3,
    solver_options=solver_options,
)
if result["status"] != "feasible":
    raise RuntimeError("No feasible Lyapunov certificate in the requested rho interval.")
rho_autolyap = result["rho"]
alpha = lambda_value / 2.0
delta = max(
    (gamma * L - 1.0) / (gamma * L + 1.0),
    (1.0 - gamma * mu) / (1.0 + gamma * mu),
)
rho_theory = (abs(1.0 - alpha) + alpha * delta) ** 2
print(f"rho (AutoLyap): {rho_autolyap:.8f}")
print(f"rho (theory): {rho_theory:.8f}")
What to inspect in result:
result["status"]: feasible, infeasible, or not_solved.
result["solve_status"]: raw backend status.
result["rho"]: certified contraction factor when feasible.
result["certificate"]: Lyapunov certificate matrices/scalars.
The computed value rho (AutoLyap) matches, up to solver numerical tolerances, the rate expression in [GB17, Theorem 2] for this setting, i.e.,
\(\rho = \left(\left|1 - \tfrac{\lambda}{2}\right| + \tfrac{\lambda}{2}\,\delta\right)^2\), where \(\delta = \max\left\{\tfrac{\gamma L - 1}{\gamma L + 1},\ \tfrac{1 - \gamma\mu}{1 + \gamma\mu}\right\}\).
Sweeping over 100 values of \(\gamma\) on \(0 < \gamma \le 5\) (with \(\mu=1\), \(L=2\), \(\lambda=1\)) gives the plot below, with the theoretical rate in black and AutoLyap certificates as blue dots.
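The theoretical curve in that plot can be reproduced directly from the closed-form rate; the AutoLyap dots require re-running the bisection above for each \(\gamma\). A short sketch of the theory-only sweep (the grid endpoints are an assumption matching \(0 < \gamma \le 5\)):

```python
import numpy as np

mu, L, lam = 1.0, 2.0, 1.0
alpha = lam / 2.0

def rho_theory(gamma):
    # Rate expression from [GB17, Theorem 2], as evaluated in the example above.
    delta = max((gamma * L - 1.0) / (gamma * L + 1.0),
                (1.0 - gamma * mu) / (1.0 + gamma * mu))
    return (abs(1.0 - alpha) + alpha * delta) ** 2

gammas = np.linspace(0.05, 5.0, 100)
rhos = np.array([rho_theory(g) for g in gammas])
# delta (and hence rho, for fixed lambda) is minimized at gamma = 1/sqrt(mu * L).
gamma_best = gammas[np.argmin(rhos)]
```

Plotting rhos against gammas (e.g. with matplotlib) reproduces the black theory curve; the minimum sits near \(\gamma = 1/\sqrt{\mu L} \approx 0.707\) for these parameters.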
References
[GB17] Pontus Giselsson and Stephen Boyd. Linear convergence and metric selection for Douglas–Rachford splitting and ADMM. IEEE Transactions on Automatic Control, 62(2):532–544, February 2017. doi:10.1109/TAC.2016.2564160.