The Malitsky–Tam forward-reflected-backward method
Problem setup
Consider the monotone inclusion
\[
\text{find } x \in \calH \text{ such that } 0 \in G_1(x) + G_2(x),
\]
where:
\(G_1 : \calH \to \calH\) is monotone and \(L\)-Lipschitz continuous, with \(L>0\),
\(G_2 : \calH \rightrightarrows \calH\) is \(\mu\)-strongly monotone and maximally monotone, with \(\mu>0\).
For an initial pair \((x^{-1},x^0)\in\calH^2\) and step size \(\gamma \in \reals_{++}\), the Malitsky–Tam forward-reflected-backward method [MT20] is given by
\[
x^{k+1} = (\mathrm{Id} + \gamma G_2)^{-1}\bigl(x^k - 2\gamma G_1(x^k) + \gamma G_1(x^{k-1})\bigr), \qquad k = 0, 1, \ldots
\]
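Before modeling the method in AutoLyap, the update can be sketched numerically on a toy instance. This is a minimal sketch, not part of the original example: it assumes \(G_1\) is a skew-symmetric linear map (monotone and \(1\)-Lipschitz) and \(G_2 = \mu\,\mathrm{Id}\) (\(\mu\)-strongly monotone), so the resolvent \((\mathrm{Id}+\gamma G_2)^{-1}\) reduces to division by \(1+\gamma\mu\) and the unique zero of \(G_1+G_2\) is the origin.

```python
import numpy as np

# Toy instance: G1 skew-symmetric (monotone, L = 1 Lipschitz), G2 = mu * Id.
# The unique zero of G1 + G2 is x = 0, so the iterates should contract to 0.
mu, gamma = 1.0, 0.2
A = np.array([[0.0, 1.0], [-1.0, 0.0]])  # skew-symmetric matrix
G1 = lambda x: A @ x

x_prev = np.array([1.0, -1.0])  # x^{-1}
x = np.array([2.0, 0.5])        # x^0
for _ in range(200):
    # Forward-reflected-backward step:
    # x^{k+1} = (Id + gamma*G2)^{-1}(x^k - 2*gamma*G1(x^k) + gamma*G1(x^{k-1}))
    x_next = (x - 2.0 * gamma * G1(x) + gamma * G1(x_prev)) / (1.0 + gamma * mu)
    x_prev, x = x, x_next

print(np.linalg.norm(x))  # distance to the solution x* = 0
```

With \(\gamma = 0.2 < 1/(2L)\) the iterates contract linearly, so the printed distance is negligibly small after 200 steps.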
In this example, we fix \(\mu=1\), \(L=1\), sweep \(\gamma \in (0,1]\), and search for the smallest contraction factor \(\rho\in[0,1)\) provable using AutoLyap such that
\[
\|x^{k+1} - x^\star\|^2 \le \rho\,\|x^k - x^\star\|^2 \quad \text{for all } k \in \{0, 1, \ldots\},
\]
where \(x^\star\) is the unique zero of \(G_1 + G_2\) (uniqueness follows from the strong monotonicity of \(G_2\)).
Model the problem in AutoLyap and search for the smallest rho
- \(G_1\) is modeled as an intersection of `MaximallyMonotone` and `LipschitzOperator`.
- \(G_2\) is modeled by `StronglyMonotone`.
- The full inclusion is built with `InclusionProblem`.
- The update rule is represented by `MalitskyTamFRB`.
- Distance-to-solution parameters are obtained with `IterationIndependent.LinearConvergence.get_parameters_distance_to_solution`.
- The contraction factor is searched with `IterationIndependent.LinearConvergence.bisection_search_rho`.
Run the iteration-independent analysis
This example uses the MOSEK Fusion backend (`backend="mosek_fusion"`). Install the optional MOSEK dependency first:

```
pip install "autolyap[mosek]"
```
```python
from autolyap import IterationIndependent, SolverOptions
from autolyap.algorithms import MalitskyTamFRB
from autolyap.problemclass import (
    InclusionProblem,
    LipschitzOperator,
    MaximallyMonotone,
    StronglyMonotone,
)

mu = 1.0
L = 1.0
gamma = 0.2

problem = InclusionProblem(
    [
        [MaximallyMonotone(), LipschitzOperator(L=L)],  # G1
        StronglyMonotone(mu=mu),  # G2
    ]
)
algorithm = MalitskyTamFRB(gamma=gamma)

solver_options = SolverOptions(backend="mosek_fusion")
# License-free option:
# solver_options = SolverOptions(backend="cvxpy", cvxpy_solver="CLARABEL")

P, T = IterationIndependent.LinearConvergence.get_parameters_distance_to_solution(
    algorithm
)
result = IterationIndependent.LinearConvergence.bisection_search_rho(
    problem,
    algorithm,
    P,
    T,
    S_equals_T=True,
    s_equals_t=True,
    remove_C3=True,
    solver_options=solver_options,
)

if result["status"] != "feasible":
    raise RuntimeError("No feasible Lyapunov certificate in the requested rho interval.")
rho_autolyap = result["rho"]

if not (0.0 < gamma < 1.0 / (2.0 * L)):
    raise ValueError("Theorem 2.9 rate expression requires 0 < gamma < 1/(2L).")
epsilon = min(0.5 - gamma * L, 5.0 * mu * gamma)
alpha = min(1.0 + 4.0 * mu * gamma - 0.75 * epsilon, 1.0 + 0.5 * epsilon)
rho_theory = 1.0 / alpha

print(f"rho (AutoLyap): {rho_autolyap:.8f}")
print(f"rho (theory):   {rho_theory:.8f}")
```
What to inspect in `result`:

- `result["status"]`: `feasible`, `infeasible`, or `not_solved`.
- `result["solve_status"]`: raw backend status.
- `result["rho"]`: certified contraction factor when feasible.
- `result["certificate"]`: Lyapunov certificate matrices/scalars.
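These fields can be consumed defensively, for example as below. This is a sketch using a mock dictionary with the documented keys; the concrete values are illustrative stand-ins, not output from a real solver run.

```python
# Mock of the documented result dictionary (illustrative values only; a real
# dict comes from IterationIndependent.LinearConvergence.bisection_search_rho).
result = {
    "status": "feasible",       # "feasible" | "infeasible" | "not_solved"
    "solve_status": "optimal",  # raw backend status string (illustrative)
    "rho": 0.83,                # certified contraction factor
    "certificate": {},          # Lyapunov certificate matrices/scalars
}

if result["status"] == "feasible":
    message = f"certified contraction factor rho = {result['rho']:.2f}"
elif result["status"] == "infeasible":
    message = "no Lyapunov certificate exists in the searched rho interval"
else:  # "not_solved"
    message = f"solver inconclusive (backend status: {result['solve_status']})"

print(message)
```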
The computed value rho (AutoLyap) is compared against the rate
expression from
[MT20, Theorem 2.9].
Under the step-size condition \(0<\gamma<1/(2L)\), define
\[
\varepsilon = \min\Bigl\{\tfrac{1}{2} - \gamma L,\; 5\mu\gamma\Bigr\}
\]
and set
\[
\alpha = \min\Bigl\{1 + 4\mu\gamma - \tfrac{3}{4}\varepsilon,\; 1 + \tfrac{1}{2}\varepsilon\Bigr\}.
\]
Then the theoretical bound is the contraction factor \(\rho_{\mathrm{theory}} = 1/\alpha\) for the distance to a solution, where \(x^\star \in \zer(G_1+G_2)\).
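The closed-form rate can be wrapped in a small helper and evaluated across the valid step-size range. `rho_theory` is a hypothetical helper name introduced here; the formula mirrors the `epsilon`/`alpha` expressions in the script above, with \(\varepsilon = \min\{1/2-\gamma L,\,5\mu\gamma\}\) and \(\alpha = \min\{1+4\mu\gamma-\tfrac34\varepsilon,\,1+\tfrac12\varepsilon\}\).

```python
def rho_theory(gamma, mu=1.0, L=1.0):
    """Theorem 2.9-derived contraction factor; requires 0 < gamma < 1/(2L)."""
    if not (0.0 < gamma < 1.0 / (2.0 * L)):
        raise ValueError("rate expression requires 0 < gamma < 1/(2L)")
    epsilon = min(0.5 - gamma * L, 5.0 * mu * gamma)
    alpha = min(1.0 + 4.0 * mu * gamma - 0.75 * epsilon, 1.0 + 0.5 * epsilon)
    return 1.0 / alpha

# Since alpha > 1 on the whole valid range, rho_theory stays strictly below 1.
for g in (0.05, 0.2, 0.45):
    print(f"gamma = {g:.2f}: rho_theory = {rho_theory(g):.6f}")
```

At \(\gamma = 0.2\) (the value used in the script above) this gives \(\alpha = 1.15\), i.e. \(\rho_{\mathrm{theory}} \approx 0.8696\).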
Sweeping over 100 values of \(\gamma\) on \((0,1]\) gives the plot below, with the Theorem 2.9-derived expression (on \(0<\gamma<1/(2L)\)) in black and AutoLyap certificates as blue dots.
References
[MT20] Yura Malitsky and Matthew K. Tam. A forward-backward splitting method for monotone inclusions without cocoercivity. SIAM Journal on Optimization, 30(2):1451–1472, January 2020. doi:10.1137/18m1207260.